Making your own DOM events in Smart Pascal

October 20, 2017 Leave a comment

Being able to listen to events is fairly standard stuff in Smart Mobile Studio, and in JavaScript in general. What is not so common is creating your own event types from scratch – events that fire on a target and that JavaScript users can listen to and use.


Now before you get confused and think this is a newbie post, I am talking about DOM (document object model) level events here; these are quite different from the event model we have in object pascal. So what I'm talking about is being able to create events that external libraries can use, for instance libraries written in plain JavaScript rather than Smart Pascal.

Interesting events

While you may think that events like these, which behave just like all the other DOM events, have little or no use – think again. First of all you can dispatch them on any element and event-emitter, so you can in fact register events on common elements like Document. You can then use custom events as a bridge between your Smart code and third party libraries. So if you have written a kick-ass media system and want to sell it to a customer who only knows JavaScript – then native JS events can act as that bridge.
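
To illustrate the bridge idea, here is roughly what the JavaScript side could look like – plain DOM code, no Smart involved. The event name "bombastic" is the one used in the example further down; the element lookup is just a made-up placeholder:

// Plain JavaScript consumer: attach a listener to the same element the
// Smart application dispatches the event on
var target = document.getElementById('myMediaHost');
target.addEventListener('bombastic', function (ev) {
  // ev.detail carries whatever data the Smart side attached
  console.log(ev.detail);
});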

Right, let’s look at a little unit I wrote to simplify this:

unit userevents;

interface

uses
  System.Types,
  System.Types.Convert,
  System.JSON,
  SmartCL.System;

type

  IW3Prototype = interface
    procedure AddField(FieldName: string; const DataType: TRTLDatatype);
    function  FieldExists(FieldName: string): boolean;
    procedure SetEventName(EventName: string);
  end;

  TW3CustomEvent = class(TObject, IW3Prototype)
  private
    FName:      string;
    FData:      TJSONObject;
    FDefining:  boolean;
    procedure   SetEventName(EventName: string);
    procedure   AddField(FieldName: string; const DataType: TRTLDatatype);
    function    FieldExists(FieldName: string): boolean;
    function    GetReady: boolean;
  public
    property    Name: string read FName;
    property    Ready: boolean read GetReady;

    function    DefinePrototype(var IO: IW3Prototype): boolean;
    procedure   EndDefine(var IO: IW3Prototype);
    function    NewEventData: TJSONObject;

    procedure   Dispatch(const Handle: TControlHandle; const EventData: TJSONObject);

    constructor Create; virtual;
    destructor  Destroy; override;
  end;

implementation

//############################################################################
// TW3CustomEvent
//###########################################################################

constructor TW3CustomEvent.Create;
begin
  inherited Create;
  FData := TJSONObject.Create;
end;

destructor TW3CustomEvent.Destroy;
begin
  FData.free;
  inherited;
end;

function TW3CustomEvent.GetReady: boolean;
begin
  result := (FDefining = false) and (FName.Length > 0);
end;

procedure TW3CustomEvent.Dispatch(const Handle: TControlHandle; const EventData: TJSONObject);
var
  LEvent: THandle;
  LParamData: variant;
begin
  if GetReady() then
  begin
    if (Handle) then
    begin
      // Check for detail-fields, get javascript object if available
      if EventData <> nil then
      begin
        if EventData.Count > 0 then
          LParamData := EventData.Instance;
      end;

      if (LParamData) then
      begin
        // Create event object with detail-data
        var LName := FName.ToLower().Trim();
        asm
        @LEvent = new CustomEvent(@LName, { detail: @LParamData });
        end;
      end else
      begin
        // Create event without detail-data
        var LName := FName.ToLower().Trim();
        asm
        @LEvent = new Event(@LName);
        end;
      end;

      // Dispatch event-object
      Handle.dispatchEvent(LEvent);
    end;
  end;
end;

procedure TW3CustomEvent.SetEventName(EventName: string);
begin
  if FDefining then
  begin
    EventName := EventName.Trim().ToLower();
    if EventName.Length > 0 then
      FName := EventName
    else
      raise EW3Exception.Create
      ('Invalid or empty event-name error');
  end else
    raise EW3Exception.Create
    ('Event-name can only be written while defining error');
end;

function TW3CustomEvent.FieldExists(FieldName: string): boolean;
begin
  if FDefining then
    result := FData.Exists(FieldName)
  else
    raise EW3Exception.Create
    ('Fields can only be accessed while defining error');
end;

procedure TW3CustomEvent.AddField(FieldName: string; const DataType: TRTLDatatype);
begin
  if FDefining then
  begin
    if not FData.Exists(FieldName) then
      FData.AddOrSet(FieldName, TDataType.NameOfType(DataType))
    else
      raise EW3Exception.CreateFmt
      ('Field [%s] already exists in prototype error', [FieldName]);
  end else
  raise EW3Exception.Create
  ('Fields can only be accessed while defining error');
end;

function TW3CustomEvent.NewEventData: TJSONObject;
const
   MAX_INT_16 = 32767;
   MAX_INT_08 = 255;
begin
  result := TJSONObject.Create;
  result.FromJSON(FData.ToJSON());

  result.ForEach(
    function (Name: string; var Data: variant): TEnumState
    begin
      // clear data with datatype value to initialize
      case TDataType.TypeByName(TVariant.AsString(Data)) of
      itBoolean:  Data := false;
      itByte:
        begin
          Data := MAX_INT_08;
          Data := $00;
        end;
      itWord:
        begin
          Data := $FFFF;
          Data := $0000;
        end;
      itLong:
        begin
          Data := 00000000;
        end;
      itInt16:
        begin
          Data := MAX_INT_16;
          Data := 0000;
        end;
      itInt32:
        begin
          Data := MAX_INT;
          Data := 0;
        end;
      itFloat32:
        begin
          Data := 1.1;
          Data := 0.0;
        end;
      itFloat64:
        begin
          Data := 20.44;
          Data := 0.0;
        end;
      itChar,
      itString:   Data := '';
      else        Data := null;
      end;
      result := esContinue;
    end);
end;

function TW3CustomEvent.DefinePrototype(var IO: IW3Prototype): boolean;
begin
  result := not FDefining;
  if result then
  begin
    FDefining := true;
    IO := (Self as IW3Prototype);
  end;
end;

procedure TW3CustomEvent.EndDefine(var IO: IW3Prototype);
begin
  if FDefining then
    FDefining := false;
  IO := nil;
end;

end.

Patching the RTL

Sadly there was a bug in the RTL that prevented TJSONObject.ForEach() from functioning properly. This has been fixed in the update we are preparing now, but it will still be a few days before that is released.

You can patch this manually right now with this little fix. Just go into the System.JSON.pas file and replace the TJSONObject.ForEach() method with this one:

function TJSONObject.ForEach(const Callback: TTJSONObjectEnumProc): TJSONObject;
var
  LData:  variant;
begin
  result := self;
  if assigned(CallBack) then
  begin
    var NameList := Keys();
    for var xName in NameList do
    begin
      Read(xName, LData);
      if CallBack(xName, LData) = esContinue then
        Write(xName, LData)
      else
        break;
    end;
  end;
end;

Creating events

Events come in two flavours: those with data and those without. This is why we have the DefinePrototype() and EndDefine() methods – namely to define what data fields the event should take. If you don't populate the prototype, the class will create an event without detail data.

Secondly, events don't need to be registered anywhere. You create one, dispatch it to a handle (or element), and if there is an event-listener attached there looking for that name – it will fire.

Ok let’s have a peek:

  // Create a custom, new, system-wide event
  var LEvent := TW3CustomEvent.Create;
  var IO: IW3Prototype = nil;
  if LEvent.DefinePrototype(IO) then
  begin
    try
      IO.SetEventName('bombastic');
      IO.AddField('name', TRTLDataType.itString);
      IO.AddField('id', TRTLDataType.itInt32);
    finally
      LEvent.EndDefine(IO);
    end;
  end;

  // Setup a normal event-listener
  Display.Handle.addEventListener('bombastic',
    procedure (ev: variant)
    begin
      var data := ev.detail;
      if (data) then
        showmessage(JSON.stringify(Data));
    end);

  // Populate some event-data
  var MyData := LEvent.NewEventData();
  MyData.Write('name','John Doe');
  MyData.Write('id', '{F6EB5680-5DC1-422E-8F72-5C60EAC0B46F}');

  // Now send the event to whomever is listening
  LEvent.Dispatch(Display.Handle, MyData);

In the above example I use the Application.Display control as the event-target. There is no special reason for this except that it’s always available. You would naturally create events like this inside your TW3CustomControl (or perhaps the Document element, under a namespace).

You will also notice that any data sent ends up in the "detail" field of the event object. We use a variant datatype since that maps directly to any JS object and also lets us access any property (and create properties, for that matter); so that's why the "ev" parameter in addEventListener() is a variant, not a fixed class.
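
As a small variation of the listener above, the individual fields we defined in the prototype can be read straight off the detail object (TVariant.AsString is the same helper used inside the unit earlier):

  Display.Handle.addEventListener('bombastic',
    procedure (ev: variant)
    begin
      // The variant gives us direct access to the JS properties
      if (ev.detail) then
        ShowMessage('name=' + TVariant.AsString(ev.detail.name)
          + ', id=' + TVariant.AsString(ev.detail.id));
    end);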

Well, hope you enjoy the show and happy coding!

PS: Smart now uses an event-manager to deal with input events (mouse, touch), but the other events work like before. Have a look at SmartCL.Events.pas to see some time-saving event classes. So instead of having to use ASM sections and variants, you can use object pascal classes to map any event.


Smart Mobile Studio and CSS: part 4

October 18, 2017 Leave a comment

If you missed the previous articles, I urge you to take the time to read through them. While not required reading for this article, they will give you a better context for the subject of CSS and how Smart Mobile Studio deals with things:

Scriptable CSS

If you are into web technology you probably know that the latest fad is so-called CSS compilers [sigh]. One of the more popular is called Less, which you can read up on over at lesscss.org. And then you have SASS, which seems to have better support in the community. I honestly could not care less (pun intended).

So what exactly is a CSS compiler and why should it matter to you as a Smart Pascal developer? That is a good question! First, it doesn't matter to you at all. Not one iota. Why? Because Smart Mobile Studio has supported scriptable CSS for years. So while the JS punters think they have invented gunpowder, they keep re-inventing the exact same stuff native languages and their programmers have used for ages. They just bling it up with cool names to make it seem all new and dandy (said the grumpy 44 year old man child).

In short a CSS compiler allows you to:

  • Define variables and constant values you can use throughout your style-sheet
  • Define repeating sections of CSS, a poor man’s “for-next block” if you like
  • Merge styles together, which is handy at times

Smart Mobile Studio took it all one step further, because we have a lot more technology on our hands than just vanilla JavaScript. So what we did was to dump the whole onslaught of power from Delphi Web Script – and we bolted that into our CSS linker process. So while the JS guys have a parser system with a ton of cryptic identifiers – we added something akin to ASP to our CSS module. It’s complete overkill but it just makes me giggle like a little girl whenever I use it.

smartstyle

The new themes being created now all tap into scripting to automate things

But how does it work, you ask? Does it execute with the program? Nope. It's purely a part of the linking process, so it executes when you compile your program. Whatever you emit (using the Print() method) or assign via the tags ends up at that location in the output. Think PHP or ASP, but for CSS instead:

  1. Smart takes your CSS file (with code) and feeds it to DWScript
  2. DWScript runs it, and spits out the result to a buffer
  3. The buffer is sent to the linker
  4. The linker saves the data either as a separate CSS file, or statically links it into your HTML file.

Pretty cool or what!
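
As a tiny, hypothetical taste of what that means in practice (the style name and color are made up; Print() is the emit method mentioned above):

<?pas
  // Runs at link time: whatever Print() emits lands right here
  // in the generated stylesheet
  const clHighlight = "#3DA9F5";
  Print('.TMyHighlight { background-color: ' + clHighlight + '; }');
?>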

So what good can that do?

It can do a world of good. For instance, when you create a theme it’s important to use the same values to ensure that things have the same general layout, colors and styles. Since you can now use constants, variables, for/next loops, classes, records and pretty much everything DWScript has to offer – you have a huge advantage over these traditional JS based compilers.

  • Gradients are generated via a pascal function
  • Font names are managed via constants
  • Font sizes can be made uniform throughout the theme
  • Standard colors that you can also define in your Smart code, thus having a unified color system, can be easily shared between the css-pascal and the smart-pascal codebases.
  • Instead of defining the same color over and over again, perhaps in hundreds of places, use a constant. When you need to adjust something you change one value instead of 200 values!

It’s no secret that browser-standards are hard to deal with. For instance, did you know that there are 3 different webkit formats for defining a top-down gradient? Then you have the firefox version, the microsoft version (edge), the microsoft IE version, the opera version and heaven-forbid: the W3C “standard” that nobody seem interested in supporting. Now having to hand-carve the same gradients over and over for the different backgrounds (of a theme) that might use them – that can be both time consuming and infuriating.

Let’s look at some code that can be used in your stylesheet’s straight away. It’s almost like a mini-unit that perhaps should be made external later. But for now, have a peek:

<?pas
  const EdgeRounding          = "4px";
  const clDlgBtnFace          = "#ededed";

  //#############################################
  // Fonts
  //#############################################
  const fntDefaultName = '"Ubuntu"';
  const fntSmallSize   = "12px";
  const fntNormalSize  = "14px";
  const fntMediumSize  = "18px";
  const fntLargeSize   = "24px";
  const fntDefaultSize = fntNormalSize;

  type
  TRGBAText = record
    rs: string;
    gs: string;
    bs: string;
    ac: string;
  end;

  type
  TBrowserFormat = (
    gtWebkit1,
    gtWebkit2,
    gtMoz,
    gtMs,
    gtIE,
    gtAny
  );

  function GetR(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 1, 2);
      result := HexToInt(temp).ToString();
    end else
      result := '00';
  end;

  function GetG(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 3, 2);
      result := HexToInt(temp).ToString();
    end else
      result := '00';
  end;

  function GetB(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 5, 2);
      result := HexToInt(temp).ToString();
    end else
      result := '00';
  end;

  function OpacityToStr(const Opacity: float): string;
  begin
    result := FloatToStr(Opacity);
    if result.IndexOf(',') > 0 then
      result := StrReplace(result, ',', '.')
  end;

  function ColorDefToRGB(const ColorDef: string): TRGBAText;
  begin
    result.rs := GetR(ColorDef);
    result.gs := GetG(ColorDef);
    result.bs := GetB(ColorDef);
    result.ac := '255';
  end;

  function ColorDefToRGBA(const ColorDef: string; Opacity: float): TRGBAText;
  begin
    result.rs := GetR(ColorDef);
    result.gs := GetG(ColorDef);
    result.bs := GetB(ColorDef);
    result.ac := OpacityToStr(Opacity);
  end;

  function GetRGB(ColorDef: string): string;
  begin
    result += 'rgb(';
    result += GetR(ColorDef) + ', ';
    result += GetG(ColorDef) + ', ';
    result += GetB(ColorDef);
    result += ')';
  end;

  function GetRGBA(ColorDef: string; Opacity: float): string;
  begin
    result += 'rgba(';
    result += GetR(ColorDef) + ', ';
    result += GetG(ColorDef) + ', ';
    result += GetB(ColorDef) + ', ';
    result += OpacityToStr(Opacity);
    result += ')';
  end;

  function SetGradientRGBSInMask(const Mask: string; First, Second: TRGBAText): string;
  begin
    result := StrReplace(Mask,   '$r1', first.rs);
    result := StrReplace(result, '$g1', first.gs);
    result := StrReplace(result, '$b1', first.bs);

    if result.contains('$a1') then
      result := StrReplace(result, '$a1', first.ac);

    result := StrReplace(result, '$r2', Second.rs);
    result := StrReplace(result, '$g2', Second.gs);
    result := StrReplace(result, '$b2', Second.bs);

    if result.contains('$a2') then
      result := StrReplace(result, '$a2', second.ac);
  end;

  function GradientTopBottomA(FromColorDef, ToColorDef: TRGBAText;
           BrowserFormat: TBrowserFormat): string;
  begin
    var xfirst := FromColorDef;
    var xSecond := ToColorDef;

    case BrowserFormat of
    gtWebkit1:
      begin
        var mask := "-webkit-gradient(linear, left top, left bottom, color-stop(0, rgba($r1,$g1,$b2,$a1)), color-stop(100, rgba($r2,$g2,$b2,$a2)))";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;

    gtWebkit2:
      begin
        var mask := "-webkit-linear-gradient(top, rgba($r1,$g1,$b2,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;

    gtMoz:
      begin
        var mask := "-moz-linear-gradient(top, rgba($r1,$g1,$b2,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;

    gtMs:
      begin
        var mask := "-ms-linear-gradient(top, rgba($r1,$g1,$b2,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;

    gtIE:
      begin
        var mask := "filter: progid:DXImageTransform.Microsoft.gradient(startColorstr=rgba($r1,$g1,$b2,$a1), endColorstr=rgba($r2,$g2,$b2,$a2),GradientType=0)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;

    gtAny:
      begin
        var mask := "linear-gradient(to bottom, rgba($r1,$g1,$b2,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    end;
  end;

  function GradientTopBottom(FromColorDef, ToColorDef: string;
           BrowserFormat: TBrowserFormat): string;
  begin
    (* var xfirst  := ColorDefToRGB(FromColorDef);
    var xSecond := ColorDefToRGB(ToColorDef);
    var mask := ''; *)

    case BrowserFormat of
    gtWebkit1:
      begin
        var mask := "-webkit-gradient(linear, left top, left bottom, color-stop(0, $a), color-stop(100, $b))";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;

    gtWebkit2:
      begin
        var mask := "-webkit-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;

    gtMoz:
      begin
        var mask := "-moz-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;

    gtMs:
      begin
        var mask := "-ms-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;

    gtIE:
      begin
        var mask := "filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='$a', endColorstr='$b',GradientType=0)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;

    gtAny:
      begin
        var mask := "linear-gradient(to bottom, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    end;
  end;
?>

This code has to be placed at the top of your CSS; it should be the very first thing in the CSS file. Now let's make some gradients!

.TW3ButtonBackground {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtWebkit1)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtWebkit2)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtMoz)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtMs)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtIE)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtAny)?>;
}

.TW3ButtonBackground:active {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtWebkit1)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtWebkit2)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtMoz)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtMs)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtIE)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtAny)?>;
}

Surely you agree that the above makes gradients a lot easier to work with? (And we can simplify it even more later.) You can also abstract it further right now by putting the start and stop colors into constants – making it super easy to maintain and change whatever styles use those constant colors.
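
For instance, a quick sketch of that abstraction, assuming two made-up constants were added to the <?pas ?> block at the top of the file:

/* added to the block at the top of the css file:
   const clBtnGradFrom = "#FFFFFF";
   const clBtnGradTo   = "#F0F0F0";  */

.TW3ButtonBackground {
  background-image: <?pas=GradientTopBottom(clBtnGradFrom, clBtnGradTo, gtAny)?>;
}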

Now let’s use our styles for something. Start a new Smart Mobile Studio Visual Project. Do as mentioned in the previous articles and make the stylesheet visible (project options, use custom css).

Now copy and paste the Pascal block above into the top of your CSS file, then paste the style-code at the end of the CSS file.

In the Smart IDE, drop a button on the form, then go into the code-editor and locate InitializeForm. Add the following to the procedure:

w3button1.StyleClass := '';
w3Button1.TagStyle.add('TW3ButtonBackground');

Compile and run the program, and voila: you will now have a button with a nice gradient background. A gradient that will work in all modern browsers, and that will be easy to maintain and change later should you want to.

Start today

Smart has had support for scriptable CSS files for quite some time. If you go into the Themes folder of your Smart Mobile Studio installation, you will find plenty of CSS files. Many of these use scripting as a part of their makeup. But it’s only recently that we have started to actively use it as it was meant to be used.

But indeed, spend a little time looking at the code in the existing stylesheets, and feel free to play around with the code I have posted here. The sky is the limit when it comes to creative and elegant solutions – so I’m sure you guys will do miracles with it.

Smart Mobile Studio and CSS: part 3

October 12, 2017 Leave a comment

In the first article we looked at some ground rules for how Smart deals with CSS. The most important part is how Smart Mobile Studio maps pascal class names to CSS style names. Simple, but extremely effective.

In the second article we looked at how you should write classes to make styling easy. We also talked about code discipline and that you should never use TW3CustomControl directly, because it makes styling time-consuming and cumbersome.

In this article we are going to cover two things: first we are going to look at probably the most powerful feature CSS has to offer, namely cascades. And then we are going to talk a bit about the new theme system we are working on. Please note that the new theme system is not available in the alpha releases yet. Like all things it has to go through the testing stage. All our visual controls need a little adjustment to support the new themes as well, which doesn't affect you – but it is time-consuming work.

Cascades

Writing a CSS style for your control should be pretty easy if you have read our previous two articles. But do you really want a large, complex and monolithic style? If you have a look at the stylesheets that ship with Smart Mobile Studio (any of them, there are several), you probably agree that they are not easy to understand at times. Each control has its style definition, that part is clear, but every style includes font, backgrounds, text colors, text shadowing, margins, borders, border shadows, gradients (ad nauseam). Long story short: stylesheets like this are hell to maintain and extremely time-consuming to make.

CSS has this cool feature where you can take any number of styles and apply them to the same element. This might sound nutty at first, but think it through, because it is going to make your life a lot easier:

  • We can isolate the border style separately
  • We can have multiple border styles and pick the ones we want, rather than a single, hardcoded and fixed version
  • We can define the backgrounds, any number of them, as separate styles

That doesn’t sound to bad does it? But wait, there is more!

Remember how I told you that animations are also defined in CSS? Since CSS allows you to add multiple styles to a control, this also means you can define a style with an animation – and then just add it when you want something to happen, and then remove the style when you don’t need it any more.

You have probably seen the spinners that websites use, right? While a website is loading something, a circle or dot keeps rotating to signal that work is being performed in the background. Well, that's pretty easy to achieve when you understand how cascades work. You just define the animation, use it in a style — and then add that style to your control. When you want to stop this behavior you just remove the style. That's it!
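
A minimal sketch of that spinner idea. The keyframe and style names below are made up, FIcon is a hypothetical control field, and Remove() is an assumed method name – the TagStyle property itself is shown elsewhere in these articles:

/* Define the animation once, then wrap it in a style we can toggle */
@keyframes my-spin {
  from { transform: rotate(0deg); }
  to   { transform: rotate(360deg); }
}

.TMySpinning {
  animation: my-spin 1s linear infinite;
}

Switching it on and off from pascal is then just a matter of adding and removing the style:

  // Start spinning while work is in progress...
  FIcon.TagStyle.Add('TMySpinning');

  // ...and stop again when the work is done
  FIcon.TagStyle.Remove('TMySpinning');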

But let’s start with something simple. Let’s define a border and a background and apply that to a control using code. And remember: styles you add does not exclude the initial style. Like we talked about earlier – Smart will take the pascal classname and use a CSS style with that name. So whatever you add to the control is extra. Which is really powerful!

You probably want to start a new visual project for this one. And remember to pick a theme in the project options (in my case I picked the "Android-HoloLight.css" theme, just so you know), save the project, then go back into the options and check the "Use custom theme" checkbox. Again exit the dialog and click save – the IDE will now create a copy of whatever theme you picked and give you direct access to it from the IDE.

Remember to check the custom theme in project options

With that out of the way you should now have a fresh, blank visual application with an item called “Custom CSS” in the project manager list. Now double click on that like before so we can get cracking, and go down to the end of the file. Add the following text:

<?pas
  const EdgeRounding = "4px";
  const clDlgBtnFace = "#ededed";
?>

.TMyButtonBorder {
  border-radius:  <?pas=EdgeRounding?>;
  border-top:     1px solid rgba(250, 250, 250, 0.7);
  border-left:    1px solid rgba(250, 250, 250, 0.7);
  border-right:   1px solid rgba(240, 240, 240, 0.5);
  border-bottom:  1px solid rgba(240, 240, 240, 0.5);

  -webkit-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
     -moz-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
          box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
}

.TMyButtonBorder:active {
  border-radius:  <?pas=EdgeRounding?>;
  border-top:     1px solid rgba(240, 240, 240, 0.5);
  border-left:    1px solid rgba(240, 240, 240, 0.5);
  border-right:   1px solid rgba(250, 250, 250, 0.7);
  border-bottom:  1px solid rgba(250, 250, 250, 0.7);

  -webkit-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
     -moz-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
          box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
}

.TMyButtonBackground {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: -webkit-gradient(linear, 0% 0%, 0% 100%,color-stop(0, rgb(255, 255, 255)),color-stop(1, rgb(240, 240, 240)));
  background-image: -webkit-repeating-linear-gradient(top,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
  background-image: repeating-linear-gradient(to bottom,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
  background-image: -ms-repeating-linear-gradient(top,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
}

.TMyButtonBackground:active {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: -webkit-gradient(linear, 0% 0%, 0% 100%,color-stop(0, rgb(231, 231, 231)),color-stop(0.496, rgb(231, 231, 231)),color-stop(0.5, rgb(231, 231, 231)),color-stop(1, rgb(255, 255, 255)));
  background-image: -webkit-repeating-linear-gradient(top,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
  background-image: repeating-linear-gradient(to bottom,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
  background-image: -ms-repeating-linear-gradient(top,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
}

Now this might look like a huge mess, but most of it is gradient coloring. If you look closer you will notice that it's the exact same gradient, just with different browser prefixes. This is to ensure that things look exactly the same no matter what browser people use. Making gradients like this is easy; there are a ton of websites that deal with this. One of my favorites is ColorZilla, which will generate all this code for you.

If you don’t know your CSS you might be wondering – what is that :active postfix? You have two declarations with the same name – but one of them has :active appended to it? The active selector (which is the fancy name) simply tells the browser that whenever someone interacts with an element that has this state – it should switch and display the :active one instead. Typically a button will look 3d when it’s not pressed, and sunken when you press it. This is automated and you just need to define how an element should look when it’s pressed via the :active postfix (note: since different controls do different things, “active” can hold different meanings. But for most controls it means when you click it, touch it or otherwise interact with it).

And now for the big question: what on earth is that first segment that looks like pascal code? Well, that is pascal code! All your CSS stylesheets are processed by a scripting engine and only the result is given to the linker. So yes indeed, you can write both functions and procedures and use them to make your CSS life easier (take that, Adobe!).

What we have done in the pascal snippet is to define a standard rounding value. That way we don't have to update 300 places where border-radius is set (or you can blank it out if you don't want round edges). We change the constant and it will spread to any style that uses it. Clever huh?

OK, let's use our styles for something fun! What we have here is a nice border definition, both active and non-active, and also a nice background. Let's use cascades to change how a button looks!

What is a button anyways

If you switch to Form1 in your application and place a TW3Button on the form, we can start to work with it. The first thing you need to do is to clear out the styleclass so that Smart doesn’t apply the default styling. That way it’s easier to see what happens. So here is how it looks when I just run and compile it:

cascade_01

Now go into the procedure TForm1.InitializeForm() in the unit Form1 and write the following code:

procedure TForm1.InitializeForm;
begin
  inherited;

  // Remove the default styling
  w3button1.StyleClass := '';

  // Add our border
  w3Button1.TagStyle.Add('TMyButtonBorder');

  // And add our background
  w3button1.TagStyle.Add('TMyButtonBackground');

  // Make the font autosize to the container
  w3button1.font.AutoSize := true;
end;

Now save, compile and run and we get the following result:

cascade_02

Suddenly our vanilla Android button has all the majesty of Ubuntu Linux! And all we did was define a couple of styles and then manually add them. We could of course have stuffed all of this into a single, monolithic style – no shame in that, but I'm sure you agree that by separating border from background, and background from content – we have a lot of power on our hands!

As an experiment: Remove the line that clears the StyleClass string and see what happens. When you click the button the browser actually blends the two backgrounds together! Had we used RGBA values in our background gradients – the browser would have blended the standard theme button with our added styles. It’s pretty frickin’ awesome if you ask me.

Here is a more extensive example of our upcoming Ubuntu Linux theme. This is not yet ready for alpha, but it represents the first theme system where all our controls make use of multiple styles. It looks and behaves beautifully.

cascade_03

From the labs: A Ubuntu Linux inspired theme that is done using cascading exclusively

Brave new themes

So far I have written exclusively about things you can do right now. But we are working every single day on Smart Mobile Studio, and right now my primary task is to finish a working prototype of our theme engine. As you can see from the picture above we still have a few controls that need to be adjusted. In the previous article I mentioned the importance of respecting borders, padding and margins from the stylesheet; well, let's just say that I have learnt that the hard way.

Most of our controls were written with no consideration for these things; we use an absolute boxing model after all, so we don't have to. But not having to do something and taking the time to do it anyway is often the difference between quality and fluff. And this time we are doing things right every step of the way.

Much like the effect system (SmartCL.Effects.pas) the theming system makes use of partial classes. This means that it simply doesn’t exist until you include the unit SmartCL.Theme in your project.

With the theme unit included (actually it's included by the RTL so it's there no matter what, but it won't be visible unless you include it in your unit scope), TW3CustomControl suddenly gains a couple of properties and methods:

  • ThemeBorder property
  • ThemeBackground property
  • ThemeReset() method

When you create custom controls you can (if you need to) define a style for that control, but this time you don’t need to define borders or backgrounds. A style is now reduced to padding, margins, some font stuff and perhaps shading if you need that. Then simply assign a ThemeBorder and ThemeBackground in the StyleTagObject() method of your control – and you can make your control look and feel at home with everything else using that theme.

Let's look at the standard borders first:

  • btNone
  • btFlatBorder
  • btControlBorder
  • btContainerBorder
  • btButtonBorder
  • btDialogButtonBorder
  • btDecorativeBorder
  • btEditBorder
  • btListBorder
  • btToolContainerBorder
  • btToolButtonBorder
  • btToolControlBorder
  • btToolControlFlatBorder

And then we have pre-defined backgrounds matching these:

  • bsNone
  • bsDisplay
  • bsControl
  • bsContainer
  • bsList
  • bsListItem
  • bsListItemSelected
  • bsEdit
  • bsButton
  • bsDialogButton
  • bsDecorative
  • bsDecorativeInvert
  • bsDecorativeDark
  • bsToolContainer
  • bsToolButton
  • bsToolControl

And as mentioned, you can assign these to any control you like.
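
To make the idea concrete, here is a hedged sketch of how a custom control could pick its look from those lists. The ThemeBorder/ThemeBackground properties and the StyleTagObject() hook are taken from the description above, but since the theme system is not released yet, the exact method signature is an assumption:

type
  TMyThemedBox = class(TW3CustomControl)
  protected
    procedure StyleTagObject; override;
  end;

procedure TMyThemedBox.StyleTagObject;
begin
  inherited;
  // Pick one value from each of the lists above
  ThemeBorder     := btControlBorder;
  ThemeBackground := bsControl;
end;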

Same defines, many themes

The cool thing about the new system is that it’s not just one theme. We start with one of course but ultimately all our themes will follow the new styling scheme. The goal is to use pure constants, much like what Delphi did with colors (clBtnFace and so on) so that we only need to change the coloring constants – and then the changes will spread to the whole theme.

You as a Smart Mobile Studio developer don't need to care about the details. As long as you stick to the standard types listed above, your custom controls will always match whatever theme is being used – and they will always look good doing it.

cascade_04

Still a few controls left to style, but I'm sure you agree that it's starting to look nice

Well that has been a rather long introduction to Smart and CSS. I hope you have enjoyed reading it. I will keep you all posted on the progress we make, which is moving ahead very fast these days!

Personally I can’t wait until Smart Mobile Studio 3.0 is ready, and I hope people value the effort we have put into this. And we are just getting started!

Smart Mobile Studio and CSS: part 2

October 11, 2017 Leave a comment

In my previous article we had a quick look at some fundamental concepts regarding CSS. These concepts are not unique to Smart Mobile Studio, but simply the way things work with CSS in general. The exception is the way Smart maps your pascal class-name to a CSS style of the same name.

To sum up what we covered last time:

  • Smart maps a control’s class-name to a CSS style with the same name. So if your control is called TMyControl, it expects to find a CSS style cleverly named “.TMyControl”. This works very well and is easy to apply.
  • CSS can affect elements recursively, so you can write CSS that changes the appearance and behavior of child controls. This technique is typically used if you inject HTML directly via the InnerHTML property
  • CSS is cascading, meaning that you can add multiple styles to the same control. The browser will merge them into a final, computed style. The rule of thumb is to avoid styles that affect the same properties
  • CSS can define more than colors; things like animations, gradients, animated gradients and whatnot can all be defined in CSS
  • Smart Mobile Studio ships with units for creating, applying and working with CSS from your pascal code. It also ships with effect classes that can trigger defined CSS animations.
  • Smart Mobile Studio has a special effect unit (SmartCL.Effects) that when added to the uses list, adds quite a few effect procedures to TW3MovableControl. These effect methods are prefixed with Fx (ex: fxMoveTo, fxFadeOut, fxScaleTo).

Best practices

When you write your own controls, don't cheat. I have seen a lot of code where people create instances of TW3CustomControl directly, and then jump through hoops trying to make that look good. TW3CustomControl is a base-class; it's designed to be inherited from – not used "as is". I can understand the confusion to some extent. I mean, since TW3CustomControl manages a DIV by default – people with some HTML background probably think creating one of these is the same as making a DIV. But by doing so they essentially short-circuit the whole theme-system since (as underlined above) all pascal classes will use a style with the same name. And TW3CustomControl is just a transparent block of nothing.

No matter how small a thing you are creating, always inherit out your own classes and give them distinct names. This is extremely important with regards to styling, but also as a discipline of writing readable, maintainable code. Using TW3CustomControl all over the place will make the code a mess to maintain – let alone share with others who don’t have a clue what you are doing.
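
Even a one-line descendant pays for itself. A minimal sketch:

type
  // An "empty" descendant is enough: the control now picks up its own
  // CSS style (".TMyPanel") instead of the anonymous TW3CustomControl
  // style, and the code states what the element is actually for
  TMyPanel = class(TW3CustomControl)
  end;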

A practical example

To show how easy it is to style things once you have written code that uses distinct class names and clear-cut structure, let’s take the time to write a little list-box. Nothing fancy, just a control that can take X number of child rows, style them and display the items vertically like a list. Let’s begin with the class code:

type

  // Define an exception especially for our control
  EMyControl = class(EW3Exception);

  // Define a baseclass, that way we can grow in the future
  TMyChild = class(TW3CustomControl)
  end;

  // Define a class type, good when working with lists or
  // collections of elements that share ancestors
  TMyChildClass = class of TMyChild;

  // Define a clear child class, that way we can apply
  // styling without problems
  TMyChildRed = class(TMyChild)
  end;

  // Create a custom version with sensitive properties only
  // available to ancestors. Here we place these in the protected
  // section (items and count)
  TCustomMyControl = class(TW3CustomControl)
  protected
    property Items[const index: integer]: TMyChild
            read ( TMyChild(GetChildObject(Index)) ); default;
    property Count: integer read ( GetChildCount );

    procedure Resize; override;
  public
    function Add(const NewItem: TMyChild): TMyChild; overload;
    function Add(const &Type: TMyChildClass): TMyChild; overload;
  end;

  // The actual control we use, this is the one we write
  // CSS code for and that we create and use in our applications.
  // This step is optional of course, but it has its perks
  TMyControl = class(TCustomMyControl)
  public
    property Items;
    property Count;
  end;

implementation

procedure TCustomMyControl.Resize;
var
  LCount: integer;
  bl, bt, br, bb: integer;
  wd, dy: integer;
  LItem: TMyChild;
begin
  inherited;

  // Avoid doing work if there is nothing there
  LCount := GetChildCount();
  if LCount > 0 then
  begin
    // Get the values of the borders/padding etc from CSS
    // We need to respect these when working in the client-rect
    bl := Border.Left.Width + Border.Left.Padding + Border.Left.Margin;
    bt := Border.Top.Width + Border.Top.Padding + Border.Top.Margin;
    br := Border.Right.Width + Border.Right.Padding + Border.Right.Margin;
    bb := Border.Bottom.Width + Border.Bottom.Padding + Border.Bottom.Margin;

    // This is the maximum width an element can have without
    // bleeding over whatever styling is present
    wd := ClientWidth - (bl + br);

    // Start at the top
    dy := bt;

    // Now layout each element vertically
    for var x := 0 to LCount-1 do
    begin
      LItem := Items[x];
      LItem.SetBounds(bl, dy, wd, LItem.Height);
      inc(dy, LItem.Height);
    end;
  end;
end;

function TCustomMyControl.Add(const &Type: TMyChildClass): TMyChild;
begin
  if &Type <> nil then
  begin
    // Start update
    BeginUpdate();

    // Create our control & return it
    result := &Type.Create(self);

    // Define that a resize must be issued
    AddToComponentState([csSized]);

    // End update. If update was not called elsewhere
    // the resize will happen now. If not, it will happen
    // when the last EndUpdate() is called (clever stuff!)
    EndUpdate();
  end else
  raise EMyControl.Create('Failed to add item, classtype was nil error');
end;

function TCustomMyControl.Add(const NewItem: TMyChild): TMyChild;
begin
  result := NewItem;
  if NewItem <> nil then
  begin
    // Are we the current parent?
    if not Handle.Contains(NewItem.Handle) then
    begin
      // Remove from other parent
      NewItem.RemoveFrom();

      // Start update
      BeginUpdate();

      // Add child to ourselves
      RegisterChild(NewItem);

      // Define that a resize must be issued
      AddToComponentState([csSized]);

      // End update. If update was not called elsewhere
      // the resize will happen now. If not, it will happen
      // when the last EndUpdate() is called (clever stuff!)
      EndUpdate();
    end;
  end else
  raise EMyControl.Create('Failed to add item, instance was nil error');
end;

If you are wondering about the strange property getters, where we don't call a function but instead have some code inside the (), that is another perk of Smart Pascal. The GetChildObject() method is part of TW3TagContainer, which TW3CustomControl ultimately inherits from, so we simply typecast and call that. This is perfectly legal in Smart as long as it's a simple function or expression with a matching type.
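
To see the pattern in isolation, here is a minimal, made-up example of such an expression getter:

type
  TMyList = class(TW3CustomControl)
  public
    // The expression inside ( ) acts as the getter, no separate
    // function is required - same trick as Items and Count above
    property IsEmpty: boolean read ( GetChildCount = 0 );
  end;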

And now let's look at the CSS for our new control and its red child:

.TMyChildRed {
  padding: 2px;
  background-color: #FF0000;
  font-family: "Ubuntu", "Helvetica Neue", Helvetica, Verdana;
  color: #FFFFFF;
  border-bottom: 1px solid #AA0000;
}

.TMyControl {
  padding: 4px;
  background-color: #FFFFFF;
  border: 1px solid #000000;
  border-radius: 4px;
  margin: 1px;
}

We need to populate the list before we can see anything of course, so if we add the following code to InitializeForm() things will start to happen:

  // Let's create our control. We use an inline variable
  // here since this is just an example and I won't be
  // accessing it later. Otherwise you want to define it
  // as a form field in the form-class
  var LTemp := TMyControl.Create(self);
  LTemp.SetBounds(100, 100, 300, 245);

  // We call BeginUpdate here to prevent the
  // control calling Resize() for every element.
  // It will only resize when the last EndUpdate (below)
  // is called. Also see how we use this inside the
  // procedures that need to force a change
  LTemp.BeginUpdate();
  for var x := 1 to 10 do
  begin
    // Create a new "red" child
    var NewItem := LTemp.Add(TMyChildRed);

    // Fill the content with something
    NewItem.InnerHTML := 'Item number ' + x.ToString();
  end;
  LTemp.EndUpdate;

The end result might not look fancy, but it demonstrates some concepts that are fundamental to working with Smart Mobile Studio: how to define CSS that maps to your classes, and how to use BeginUpdate() and EndUpdate() to prevent a ton of calls to Resize() when adding multiple items.

mycontrol

It won't win any prizes for looks, but it demonstrates some very important principles when writing controls

Effects

Being able to style and layout child elements in your own controls is cool, but applications can quickly become dull and static without visual feedback. This is why I wrote the effect unit, namely to make it so easy to use GPU powered effects in your applications that anyone can make stuff move around.

So let’s make a little change to our mini-list control. When a user press one of the items, we want the item to scale up while the mouse is pressed, and then gracefully shrink back to normal size when you let go of the mouse. We could make it spin around for that matter, but let’s start with something a bit more down to earth.

This is where defining our own classes comes into play. We are going to add some code to our root child class, TMyChild, because this behavior should be universal. For the sake of simplicity I'm just going to use the control's own events for this purpose. So let's expand our ancestor class to the following:

  TMyChild = class(TW3CustomControl)
  private
    FDown: boolean;
    procedure HandleMouseDown(Sender: TObject; Button: TMouseButton;
                        Shift: TShiftState; X, Y: integer);
    procedure HandleMouseUp(Sender: TObject; Button: TMouseButton;
                        Shift: TShiftState; X, Y: integer);
  protected
    procedure InitializeObject; override;
  end;

The implementation needs to keep track of when a scale is in process, otherwise we can scale the element out of sync with the UI. Again this is just an example, there are many ways to keep track of things but let’s keep it simple:

procedure TMyChild.InitializeObject;
begin
  inherited;
  self.OnMouseDown := HandleMouseDown;
  self.OnMouseUp := HandleMouseUp;
end;

procedure TMyChild.HandleMouseDown(Sender: TObject; Button: TMouseButton;
                    Shift: TShiftState; X, Y: integer);
begin
  if Button = TMouseButton.mbLeft then
  begin
    if not FDown then
    begin
      FDown := true;
      fxScaleUp(1.0, 1.5, 0.3);
    end;
  end;
end;

procedure TMyChild.HandleMouseUp(Sender: TObject; Button: TMouseButton;
                    Shift: TShiftState; X, Y: integer);
begin
  if Button=TMouseButton.mbLeft then
  begin
    if FDown then
    begin
      fxScaleDown(1.5, 1.0, 0.3, procedure ()
      begin
        FDown := false;
      end);
    end;
  end;
end;

The result? Well, when we press one of the items in our list, that item grows to 1.5 times its original size (the parameter names for the effects are easy to understand). So we scale from 1.0 (normal size) to 1.5, and we tell the system to execute this transition in 0.3 seconds.

All the effect methods have an optional callback procedure you can use (anonymous procedure) that will fire when the effect is finished. As you can see in the HandleMouseUp() method we use this to reset the FDown field, allowing the effect to be executed again on the next click.

mycontrol2

Smooth scaling via hardware

Next time

Hopefully the past two articles have been interesting. In our next article we will look at some of the stuff we are building in our labs. That means talking about styling and how we are working to improve it (read: not yet available but in the process).

In the meantime, have a peek at what you can do with proper use of CSS effects:

mycontrol3

You can do some amazing things with effects and JS (click image)

Happy coding!

Smart Mobile Studio and CSS: part 1

October 9, 2017 Leave a comment

If I were to pinpoint a single feature of the modern HTML5 rendering engine that demands both respect and care, it would have to be CSS. While it's true that no other piece of technology has seen the same level of development as "the browser" over the past 20 years – the part that has evolved the most is without a doubt CSS.

When we designed Smart Mobile Studio, styling became an issue almost from the start. I knew CSS well and I was reluctant to create a theming engine for Smart, because it's so easy to fall into the same pit that Macromedia once did; namely that you end up boxing the user into a corner with the best of intentions. So instead of writing a large and complex styling engine, we designed the simplest possible system we could imagine – and left the rest to our users.

For advanced users that know their way around CSS, HTML and Javascript as well as they know object pascal, this has been a great bonus. But for users that come directly from Delphi or Lazarus with little or no background in web technology – CSS has been a black box they would rather not touch. Which is really sad because well written CSS makes up as much as 40% of a good application. If not more (!).

CSS for smarties

Most Delphi developers in their 40s who never really got into web development (because they were too busy coding in Delphi) probably think of CSS as a coloring language. I keep hearing the same thing over and over: "CSS? You can set colors, background pictures and stuff". In part they are right – back in the late 90s, that is. Yes, CSS allows you to define how things should be colored and stuff like that – but CSS has evolved side by side with modern JavaScript and HTML, and as such it's capable of a lot more than just setting colors.

The most important features you want to know about is:

  • You can define gradients as backgrounds, not just a static color or picture
  • You can use alpha blending (rgba) rather than fixed colors (#rrggbb)
  • You can define elaborate animations
  • Animations can use most CSS properties: colors, size, opacity and / or position
  • CSS is recursive, you can define rules that applies to child elements of a control using a style. You can also target child elements by name.
  • CSS is no longer just 2D but also 3D (Note: Sprite3d has been ported to Smart, see SmartCL.Sprite3d.pas), so you can place elements in 3D space
  • Rotation is now standard, be it purely 2D or 3D
  • You can define transitions directly on a property, like how long a move should take
  • CSS is cascading (hence the term “cascading style sheets”)
  • CSS allows elements to inherit properties from their parents, which is extremely handy if you want all child elements to use the font you set in the first, actual control you are making.
  • Filters! You can now apply great graphics filters on your content
  • CSS is powered by the GPU (graphical processing unit) and makes full use of the graphics chipset on the target device

This is just the tip of the iceberg of what modern CSS has to offer, but before you dive in, let's look at some fundamental facts you need to know when working in Smart Mobile Studio.

Class to style mapping

Have you ever wondered how a custom control in Smart knows what CSS style to use? For instance, if you drop a TW3Panel on a form – where does the style come from? Is there some magical spell that automatically assigns a piece of CSS to the visual control? Sure, you know there is a CSS file that's generated for the application, and you can pick between a few themes, but how is the panel CSS style attached to an instance of TW3Panel?

Like I mentioned above, we tried to leave CSS alone in fear of boxing the user into a system that was too limited or too loose. But we did one stroke of genius, and that was to automatically map the pascal class-name to the CSS class name. And this turned out to be a very efficient method of dealing with styling.

So to make this crystal clear: Let’s say you create a new control called TMyControl right? When you create an instance of that control in your pascal code, it will automatically try to use a CSS style with the same name. So far that is the only rule we have enforced. But it is extremely important to know this and understand how powerful that is.

Recursive CSS

The next thing I want to explain is how you can define recursive styles. Again, let's say you have created a new custom-control called TMyControl. You go into your project options, click on "Linker" in the treeview on the left – and then check the "Use custom theme" checkbox. This makes a copy of whatever theme you picked for your application and stores that copy within your project file. When you click "OK" to exit the project options dialog and click "Save", your project will get a new item cleverly named "Custom CSS". This is where you add your own styles.

projectOptions

So ok, we have a control called TMyControl and now we want to style it. So we double-click on the “Custom CSS” node in our project, and we are greeted with a ton of weird looking CSS code.

So let’s go ahead and create a style with the same name as our pascal class, that way they will find each other:

.TMyControl {
  background-color: #FF0000;
}

Click “Save” again (or “CTRL + S” on your keyboard) and compile + run your program. If you had created an instance of TMyControl on your form, you should now see a red box. Not much to look at just yet, but we will deal with that later.

But a blank control is really not much fun. So for the sake of argument let's say you want to display a header inside your control. You create a second class called TMyHeader and then create that in the constructor of TMyControl. And we want to place it at the top of our TMyControl display, 32 pixels high. So we end up with something like this:

unit Unit1;

interface

uses
  System.Types, System.Colors, System.Types.Convert,
  SmartCL.System, SmartCL.Graphics, SmartCL.Components, SmartCL.Forms,
  SmartCL.Fonts, SmartCL.Borders;

type

// our header
TMyHeader = class(TW3CustomControl)
end;

// our new cool control
TMyControl = class(TW3CustomControl)
private
  FHeader: TMyHeader;
protected
  procedure InitializeObject; override;
  procedure FinalizeObject; override;
  procedure Resize; override;
public
  property Header: TMyHeader read FHeader;
end;

implementation

procedure TMyControl.InitializeObject;
begin
  inherited;
  FHeader := TMyHeader.Create(self);
end;

procedure TMyControl.FinalizeObject;
begin
  FHeader.free;
  inherited;
end;

procedure TMyControl.Resize;
begin
  inherited;
  FHeader.SetBounds(0, 0, ClientWidth, 32);
end;

end.

At this point we can of course do the same as we just did, namely add a CSS style called ".TMyHeader" and define our header there – which is also how you should do things. But there will be cases where you don't have this fine control over things – perhaps you are using a library, or maybe you are generating HTML and just injecting it via the InnerHTML property? Who knows, but the point is we can actually write CSS that targets ANY child element without knowing much about it. And we do that using something called a CSS selector.

So let’s say I want to color all children of TMyControl, regardless of type, green (just for the hell of it). Well, then I can do like this in our CSS:

.TMyControl {
  background-color: #FF0000;
}

/* Color all (*) children green! */
.TMyControl > * {
  background-color: #00FF00;
}

We can also be more specific and say: color the first P (paragraph) inside the first DIV child green! And I should mention that the default tag that TW3CustomControl manages is a DIV. Well, to target the text paragraph inside the first child we would write:

.TMyControl {
  background-color: #FF0000;
}

/* Color the P inside the first DIV green! */
.TMyControl > :first-child > P {
  background-color: #00FF00;
}

Now you are probably wondering: where did that "P" come from? There is no paragraph in my code? Well, as mentioned we can add that via the innerHTML property if we like:

procedure TMyControl.InitializeObject;
begin
  inherited;
  FHeader := TMyHeader.Create(self);
  FHeader.innerHTML := '<p>This is the text!</p>';
end;

Note: WordPress has a tendency to kill HTML tags, so if you don't see a paragraph tag in the string above, WordPress gobbled it up.

Now the point of this code so far has not been to teach how to write good code. In fact, you really should avoid code like this unless you know what you are doing. The point here was to show you how CSS can be made to operate on structures. Once a style is attached to a control, selector code like I demonstrated above will kick in automatically and you can do some pretty amazing things with it. Just changing the background doesn't really give this system the credit it deserves. You can add animations, change the row color of every odd list item, add a glowing rectangle only around a particular element — the sky is the limit!

The cascading part

This is probably one of the simplest features ever, yet it’s one that people fail to remember when they sit down to write CSS code. So please make a note of this one because it will save you so much time.

So far we have looked at single styles attached to a control. But truth be told, you can assign 100 styles to the same control – at the same time (!). What happens is that the rendering engine merges them all together and draws whatever the outcome is onto the display. The only rule is: they must not collide. If you define two backgrounds the style engine will try to merge them, but odds are only one of them will survive (the more specific rule, or the one defined last, wins).

But let’s stop for a minute and think about what this means:

  • Instead of one large, monolithic style for a control, you can divide it into smaller and more manageable parts
  • You can define borders in one style, background in another and fonts in a third
  • You can have two separate animations running at the same time targeting the same element – and as long as they don't manipulate the same properties, it will work just fine.

It can take a while for the true potential of this to really sink in.

To give you a practical example: This is how Smart Mobile Studio deals with disabled states. Whenever you disable a control, a style called “DisabledState” is added to the control. This takes over opacity, disables mouse and touch events, changes the mouse cursor and draws a diagonal pattern that covers the control.

When the control is enabled again, we simply remove the style and it reverts back to normal. It’s pretty cool if I say so myself!
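In code that whole dance is invisible. A minimal sketch (the button name here is just a placeholder):

// Disabling the control attaches the "DisabledState" style class for us
W3Button1.Enabled := false;

// Enabling it again removes the class and the control reverts to normal
W3Button1.Enabled := true;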

TW3CustomControl, which is the foundation for all visible controls on the palette, has a property called "CSSClasses". This has been deprecated and replaced by "TagStyles", but both still work. This class gives you easy methods for adding, removing and checking whether any extra styles (apart from the default style) have been added.

It looks like this:

TW3TagStyle = class(TW3OwnedObject)
  private
    FCache:     TStrArray;
    FCheck:     integer;
    FHandle:    TControlHandle;
  protected
    function    GetCount: integer; virtual;
    function    GetItem(const Index: integer): string; virtual;
    procedure   SetItem(const Index: integer; const Value: string); virtual;
    {$IFDEF USE_CSS_PARSER}
    procedure   ParseToCache(CssStyleText: String); virtual;
    {$ENDIF}
    procedure   CacheToTag; virtual;
    procedure   TagToCache; virtual;
    function    AcceptOwner(const CandidateObject: TObject): Boolean; override;
  public
    property    Handle: TControlHandle read FHandle;
    property    Count: integer read GetCount;
    property    Items[const Index: integer]: string read GetItem write SetItem;

    procedure   Update; virtual;

    class procedure AddClassToControl(const Handle: TControlHandle; CssClassName: string);
    class function ControlContainsClass(const Handle: TControlHandle; CssClassName: string): boolean;
    class procedure RemoveClassFromControl(const Handle: TControlHandle; CssClassName: string);

    function    Contains(const CssClassName: string): boolean;
    function    Add(CssClassName: string): integer;
    function    Remove(const Index: integer): string;
    function    RemoveByName(CssClassName: string): string;
    function    IndexOf(CssClassName: string): integer;
    function    ToString: string;
    procedure   Clear;
    constructor Create(AOwner: TObject); override;
    destructor Destroy; override;
  end;

So let's say you have a fancy animated background you want to show while doing something; then simply call the AddClassToControl() method.
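A small sketch of how that could look. The style class name ("FancyBackground") is something you would define yourself in the custom CSS, and the panel is just a placeholder control:

// Attach the extra style class while the work is running
TW3TagStyle.AddClassToControl(W3Panel1.Handle, 'FancyBackground');

// ... do the lengthy operation ...

// And remove it again when you are done
TW3TagStyle.RemoveClassFromControl(W3Panel1.Handle, 'FancyBackground');

You can also go through the TagStyles property on the control itself – W3Panel1.TagStyles.Add('FancyBackground') – both routes end up doing the same thing.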

I should mention that I have used the word "style" so far to avoid confusion. A CSS definition is not really called a style in HTML land, but a style class. I just used "style" to keep things simple for everyone.

Summing up

In this short article we have had a look at the fundamental rules of CSS. We have looked at how a control matches and finds its CSS style, and how to define your own styles. We also brushed into the concept of CSS selectors, which can recursively affect child elements in your controls — and last but not least, we have talked about cascading and how you can assign multiple styles to the same element.

In our next article we are going to look at some of the next-generation features in our RTL regarding styles, and also talk a bit about what we have cooking in our labs. Needless to say, CSS is going to become easier and much more powerful in the weeks to come, so it’s important that you pick up on the basics now!

Homework (if you need it) is to have a look at the CSS pascal classes in our RTL. They contain a lot of nice features, helper classes and more to generate platform independent CSS code that you can use right now.

You want to go through the following units:

  • SmartCL.CSS.StyleSheet
  • SmartCL.CSS.Classes
  • SmartCL.Effects

Have a peek at the method "TSuperStyle.AnimGlow" and see how CSS can be written as code, although in most cases it's easier to just write it as vanilla CSS. You will also be happy to know that stylesheets can be created as normal Pascal objects, so you don't have to put all your eggs into one basket.

The last unit in that list, SmartCL.Effects, is special. It uses something called "partial classes", a feature not supported by Delphi or Lazarus. In short it means that you can spread the declaration of a class over many units.

When you add SmartCL.Effects to your form’s uses clause, TW3CustomControl suddenly gains a ton of effect methods (prefixed by “fx”). These are CSS animation effects that you can call on any control. You can also daisy-chain them together and they will execute in sequence. Again this demonstrates what you can achieve with CSS and some clever programming.
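Just to illustrate the idea – the exact method names and parameters below are from memory, so treat them as assumptions and check SmartCL.Effects for the real signatures:

// Each fx* call queues an effect; chained calls play one after the other
W3Panel1.fxMoveTo(200, 100, 0.5)
        .fxSizeTo(300, 200, 0.5)
        .fxFadeOut(0.8);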

Until next time!

Webfonts in Smart Mobile Studio

October 4, 2017 2 comments

Webfonts are something I have wanted to include in Smart for ages now. It's such a simple feature, but when you use it right it becomes powerful and reassuring.

What is a webfont?

Well, you know how you have to define what fonts you use in HTML, right? And if the user doesn't have that font, you have fallback fonts it can use instead? If you have worked with web technology for a while you no doubt know how haphazard the results can be. You would think that a font like "Verdana" looks exactly the same from system to system – but that is not always the case.

webfonts

Adding webfonts to your project is very easy

Apple, for instance, has its own tweak on just about every typeface; Linux often has alternatives that look good but might not be 100% identical (on some distros; Linux is not exactly "one thing"). And Microsoft tends to live in its own universe.

The solution? Webfonts. In short it means that the browser will double-check if the user has the font you need installed. And if they don’t – the font is downloaded from a font provider (like Google) when your web application starts.

Fonts, glorious fonts!

The result is that your application will look and feel the same no matter what device is used. And that is a very important thing – because coding flexible, adaptive UIs that should work on Android, iOS, TVs and ordinary browsers is no picnic to begin with. Having to worry that your fancy Ubuntu-based UI is rendered using vanilla sans-serif (read: looking like something out of the 80s) has been an ever-present reality for two decades now.

webfonts2

Plenty of good looking fonts on Google

If you head over to https://fonts.google.com and take a gander at the fonts available, I’m sure you agree that this is a fabulous idea. And as always, when you combine good-looking fonts with some cool CSS – the results can be spectacular.

Still in Alpha

We are still in Alpha for Smart Mobile Studio 3.0, so there might be hiccups along the way. But all in all you should be able to enjoy webfonts in our next update.

Cheers!

Why buy a Vampire accelerator?

August 24, 2017 2 comments

With the Amiga about to re-enter the consumer market, a lot of us "old timers" are busy knocking the dust off our old machines. And I love my old machines even though they are technically useless by modern standards. But these machines have a lot of inspiration in them, especially if you write code. And yes, there is a fair bit of nostalgia involved – no point in lying about that.

I mean, your mobile phone is probably 100 times faster than a vintage Amiga. But like you will discover with the new machines that are about to hit the market, there is more to this computer than you think. But vintage Amigas? Sadly they lack the power to do anything useful [in the "modern" sense].

Enter the vampire

The Vampire is a product that started shipping about a year ago. It's an FPGA-based accelerator, and it's quite frankly turning the retro scene on its head! Technically it's a board that you just latch onto the CPU socket of your classic Amiga; it then takes over the whole machine and replaces the CPU and chipset with its own versions of these. Versions that are naturally a hell of a lot faster!

vanpire

The result is that the good old Amiga is suddenly beefy enough to play Doom, Quake, MP3 files and MPG video (click here to read the datasheet). In short: this little board gives your old Amiga machine a jolt of new life.

Emulation vs. FPGA

I'm not going to get into the argument about FPGA not being "real", because that's not what FPGA is about. Nor am I negative towards classical hardware – I own a ton of old Amiga gear myself. But I will get in your face when it comes to buying a Vampire.

Before we continue I just want to mention that there are two models of the Vampire. There is the add-on board I have just mentioned, which is again divided into different models for various Amiga versions (A600 and A500 so far). The second model is a completely stand-alone Vampire motherboard that won't even need a classic Amiga to work. It will be, for all intents and purposes, a stand-alone SBC (single board computer) where you just hook up power, video, storage and a mouse – and off you go!

This latter version, the stand-alone, is a project I firmly believe in. The old boards have been out of production since 1993 and are getting harder to come by. And just like people they will eventually break down and stop working. There is also the price to consider, because getting your 30-year-old A500 fixed is not easy. First of all you need a specialist who knows how to fix these old things, and he will also need parts to work with. Since parts are no longer in production and homebrew can only go so far, well – a brand new motherboard that is compatible in every way sounds like a good idea.

There is also the fact that FPGA can reach absurd speeds. It has been mentioned that if the Vampire used a more expensive FPGA module, 68k-based Amigas could compete with modern processors (Source: https://www.generationamiga.com/2017/08/06/arria-10-based-vampire-could-reach-600mhz/). Can you imagine a 68k Amiga running side by side with the latest Intel processors? Sounds like a lot of fun if you ask me!

20934076_1512859975447560_99984790080699660_o

Amiga 1000, in my view the best looking Amiga ever produced

But then there is emulation. Proper emulation, which for Amiga users can only mean one thing: UAE in all its magnificent diversity and incarnations.

Nothing beats firing up a real Amiga, but you know what? That experience has been greatly exaggerated. I recently bought a sexy A1000, which is the first model that was ever made. This is the original Amiga, made way back before Commodore started to mess around with it. It cost me a small fortune to get – but hey, it was my first ever Amiga so I wanted to own one again.

But does it feel better than my Raspberry PI 3b powered A500? Nope. In fact I have only fired up the A1000 twice since I bought it, because having to wait for disks to load is just tedious (not to mention that you can't get new, working floppy disks anymore). Seriously. I love the machine to bits but it's just damn tedious to work on in 2017. It belongs to the 80s and no one can ever take away its glory or its role in computer history. That achievement stands forever.

High Quality Emulation

If you have followed my blog and Amiga escapades, you know that my PI 3b based Amiga, overclocked to the hilt, yields roughly 3.2 times the speed of an Amiga 4000/040. This was at one point the flagship Commodore computer. The Amiga 4000s were used in movie production, music production, 3D rendering and heavy-duty computing all over the world. And the 35€ Raspberry PI gives you 3.2 times that power via the UAE4Arm emulator. I don't care what the Vampire does, the PI will give it the beating of its life.

Compiling anything, even older stuff that is a joke by today's standards, is painful on the Raspberry PI. Here showing my retrofitted A500 PI with sexy LED keyboard. It will soon get a makeover with a UP board :)

My retrofitted Raspberry PI 3b Amiga. Serious emulation at high speed allowing for software development and even the latest Freepascal 3.x compiler

Then suddenly, out of the blue, Asus comes along with the Tinkerboard. A board that I hated when it first came out (read part 1 here, part 2 here) due to its shabby drivers. The boards have been collecting dust on my office shelf for six months or so – and it was blind luck that I downloaded and tested a new disk image. If you missed that part you can read the full article here.

And I’m glad I did because man – the Tinkerboard makes the Raspberry PI 3b look like a toy! Asus has also adjusted the price lately. It was initially priced at 75€, but in Norway right now it retails for about 620 NKR – or 62€. So yes, it’s about twice the price of the PI – but it also gives you twice the memory, twice the graphics performance, twice the IO performance and a CPU that is a pleasure to work with.

The Raspberry PI 3b can't be overclocked to the extent the models 1 and 2 could. You can over-volt it and tweak the GPU and memory to make it run faster. But people don't call that "overclocking" in the true sense of the word, because that would mean setting the CPU to run at speeds beyond the manufacturing specifications. So with the PI 3b there is relatively little you can do to make it run faster. You can speed it up a little bit, but that's it. The Tinkerboard can be overclocked to the hilt.

A1222

The A1222 motherboard is just around the corner [conceptual art]

Out of the box it runs at 1.5 GHz, but if you add a heatsink, a fan (important) and a 3A PSU – you can overclock it to 2.6 GHz. And like the PI you can also tweak the memory and GPU. So the Tinkerboard will happily run 3 times faster than the PI. If you add a USB3 hard disk you will also beef up IO speeds by 100 megabytes a second – which makes a huge difference. Linux does memory paging, and it slows down everything if you just use the SD card.

In short: if you fork out 70€ you get an SBC that runs rings around both the Vampire and the Raspberry PI 3b. If we allow for some Linux services and drivers that have to run in the background, 3.2 x 3 = 9.6. Let's round that down to 9, since there will be performance hits from the background services. But still — 70€ for an Amiga that runs 9 times faster than an A4000 with an MC68040 CPU? That should blow your mind!

I'm sorry, but there has to be something wrong with you if that doesn't get your juices flowing. I rarely game on my classic Amiga setup. I'm a coder – but with this kind of firepower you can run some of the biggest and best Amiga titles ever made – and the Tinkerboard won't even break a sweat!

You can’t afford to be a fundamentalist

There are some real nutbags in the Amiga community. I think we all agree that having the real deal is a great experience, but the prices we see these days are borderline insane. I had to fork out around 500€ to get my A1000 shipped from Belgium to Norway. Had tax been added on the original price, I would have looked at something in the 700€ range. Still – 500€ for a 30-year-old computer that can hardly run Workbench 1.2? Unless you add the word "collector" here you are in fact barking mad!

If you are looking to get an Amiga for old times' sake, or perhaps you have an A500 and wonder if you should fork out for the Vampire – will it be worth the 300€ price tag? Unless you use your Amiga on a daily basis I can't imagine what you need a Vampire for. The stand-alone motherboard I can understand, that is a great idea – but the accelerator? 300€?

I mean, you can pay 70€ and get the fastest Amiga that ever existed. Not a bit faster, not something in second place – no – THE FASTEST Amiga that has ever existed. If you think playing MP3 and MPG media files is cool with the Vampire, then you are in for a treat here, because the same software will work. You can safely download the latest patches and updates to the various media players on the classic Amiga, and they will run just fine on UAE4Arm. But this time they will run a hell of a lot faster than on the Vampire.

14269402_10153769032280906_1149985553_n

My old broken A500 turned into an ass-kicking, battle hardened ARM monster

You really can't be a fundamentalist in 2017 when it comes to vintage computers. And why would you want to? With so much cool stuff happening in the scene, why would you want to limit your Amiga experience to a single model? Aros is doing awesome stuff these days, you have the x5000 out and the A1222 just around the corner. MorphOS is stable and good on the G5 PPC — there has never been a time with so many options for Amiga enthusiasts! Not even during the golden days between 1989 and 1994 were there so many exciting developments.

I love the classic Amiga machines. I think the Vampire stand-alone model is fantastic, and if they ramp up the FPGA to a faster model, they have in fact re-created a viable computer platform. A 68080 FPGA-based CPU that can go head-to-head with x86? That is quite an achievement – and I support that wholeheartedly.

But having to fork out this amount of cash just to enjoy a modern Amiga experience is a bit silly. You can actually, right now, go out and buy a $35 Raspberry PI and enjoy far better results than the Vampire is able to deliver. How can that be negative? I have no idea, nor will I ever understand that kind of thinking. How do any of these people expect the Amiga community to grow and get new, young members if the average 30-year-old machine costs 500€? Which incidentally is 50€ more than a brand new A1222 PPC machine capable of running OS 4.

And with the Tinkerboard you can get 9 times the speed of an A4000? How can that not give you goosebumps!

People talk about Java and virtual machines like it's black magic. Well, UAE gives you a virtual CPU and chipset that makes mincemeat of both Java and C#. It also comes with one of the largest software libraries in the world. I find it inconceivable that no one sees the potential in that technology beyond game playing – but when you become violent or nasty over hardware, then I guess that explains quite a bit.

I say, use whatever you can to enjoy your Amiga. And if your perfect Amiga is a PI or a Tinkerboard (or ODroid) – who cares!

I for one will not put more money into legacy hardware. I'm happy that I have the A1000, but that's where it stops for me. I am looking forward to the latest Amiga x5000 PPC and can't wait to get coding on that – but unless the Apollo crew upgrades to a faster FPGA I see little reason to buy anything. I would gladly pay 500 – 1000 € for something that can kick modern computers in the behind. And I imagine a lot of 68k users would be willing to do that as well. But right now PPC is a much better option since it gives you both 68k and the new OS 4 platform for one price. And for affordable Amiga computing, emulation is now of such quality that you won't really notice the difference.

And I love coding 68k assembler on my Amibian emulator setup. There is nothing quite like it 🙂

The Tinkerboard Strikes Back

August 20, 2017 Leave a comment

For those that follow my blog, you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it – in fact I bought two of the boards because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never emerged for the user. It was released prematurely, and I think most people that bought it agree on this.

asus

The initial release was not just bad, it was horrible

Since my initial review those months ago good things have happened. Asus seems to have listened to the "poonami" of negative feedback and adapted their website accordingly. Unlike the first time I visited, when you literally had to dig through recursive menus (which were less than intuitive in this case) just to download the software – the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get; downloading it is reduced to a "one-liner" for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever-popular Python and Scratch.

I am a bit disappointed that they don't provide Freepascal units. A lot of developers use Object Pascal on these boards after all, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good Pascal programmer – but it would be cool if Asus remembered that there are more languages in the world than C and Python.

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the textbook example of how not to release a product (the whole release has been odd; Asus is a huge, multinational corporation, yet their release had "basement three-man band" written all over it).

So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and the like; basically anything with a touch-screen that does something.

smarts

The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix is written 100% in Node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end that couples with a HTML5 front-end. This is a ready to use environment that you can deploy your own applications through. Things like storage, a nice looking UI, user logon and credentials and much, much more is all implemented for you. You don’t have to use it of course, you can write your own system from scratch if you like. We created “Amibian” to demonstrate just how powerful Smart Pascal can be in the right hands.

With this in mind – my main concern when testing SBCs (single board computers) is obviously web performance. By default JavaScript is a single-core, event-driven runtime system; you can spawn threads of course, but it's done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system friendly and a gentle giant if you like, which has turned out to be a good thing – because the way JS schedules execution makes it ideal for clustering!

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost a whole ecosystem in themselves. With JIT compilers and LLVM optimization — it's a whole new ballgame.

Making a scale

To give you a better context to see where the Tinkerboard is on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and see how each of them perform on different boards. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and last but not least: the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning that they were compiled as part of the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b

bassoon

Bassoon ran well, it's not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (they contain hand-optimized assembler, so not exactly a fair fight) – but when it comes to modern HTML5, the PI doesn't stand a chance. You could perhaps use a Raspberry PI 3b for simple applications which are not graphics- and CPU-intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don't want a Raspberry when doing high-quality IoT, unless it's headless code and node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box – in fact you can target 40 embedded systems without any change to your code. But for large, high-quality web front-ends, the PI just won't cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick-ass ARM experience anyway. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4

XU4CloudShellAssemble29

The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean we are talking about a 45€ SBC here. And it's only 10€ more than the Raspberry PI, which is a toy at best. But the ODroid XU4 delivers a good Linux desktop; and it's well worth the extra 10€ when compared to the PI.

Personally I don't understand why people keep buying PIs when there are so many better options on the market now. At least not if web technology is involved. A small server or emulator, sure, but not HTML5 and browsers. The PI just can't handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that ODroid XU4 can run Quake.js! The PI can’t even start that because it’s so demanding. It is one of the largest and most resource hungry asm.js projects out there – but ODroid XU4 did a fantastic job.

Now it's not a silky smooth experience; I would guess something along the lines of 17-20 fps. But you know what? That's pretty good for a $45 board.

I have owned far worse x86 PC’s in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that something had been done by Asus to rectify the situation though, because Asus really should have done a better job before releasing it. It's now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have waited with the release. I was not alone in butchering the whole board; it's been a source of frustration for those that bought it. 75€ is not much, but no one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order I have a feeling the Jessie image will be even faster – Ubuntu has always been a high-quality distribution, but it's also one of the most demanding. One might even say it's become bloated. But it does deliver a near Microsoft Windows-like experience, which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus have cleaned up their act and implemented the drivers properly, and you can feel that the moment the desktop comes into view. With the PI you are always fighting with lagging performance. When you start a program the whole system freezes for a while, when you quit a program the system freezes – hell when you move the mouse around the system bloody freezes! Well that is not the case with the Tinkerboard that’s for sure. The tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC to be honest.

  • Skid-Row: 10-15 frames per second
  • Quake: Full screen 32bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any bit of embedded coding, regardless if you are using Smart Mobile Studio or some other devkit — this is the board to get (!)

Like already mentioned it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code and you can explore software far more complex than both the PI and ODroid combined. With the tinkerboard you can finally deliver a state of the art product built with off the shelves web components. It’s in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinkerboard I felt cheated. It was so frustrating because the specs were so good, and the terrible performance just came down to sloppy work and Asus releasing it prematurely for cash (let's face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. Like I mentioned, Ubuntu is not the racecar of distributions out there. Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompiled every piece with maximum LLVM optimization and as few running services as possible — the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the tinkerboard! It is by far the coolest board in my collection now. In fact it’s so good that I’m donating one to my daughter. She is presently using an iMac which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and Youtube, buy a nice touch-screen display and wall mount it in her room.

Well done Asus!

Where is PowerPC today?

August 5, 2017 2 comments
ppcproto_2_big

Phase 5 PowerUP board prototype

Anyone who messed around with computers back in the 90s will remember PowerPC. This was the only real alternative to Intel's complete dominance with the x86 CPUs, and believe me when I say the battle was fierce! Behind the PowerPC you had companies like IBM and Motorola, companies that both had (or have) an axe to grind with Intel. At the time the market was split in half – with Intel controlling the business PC segment – while Motorola and IBM represented the home computer market.

The moment we entered the 1990s it became clear that Intel and Microsoft were not going to stay on their side of the fence, so to speak. For Motorola in particular this was a death match in the true sense of the word, because the loss of both Apple and Commodore would represent billions in revenue.

What could you buy in 1993?

The early 90s were bittersweet for both Commodore and Apple. Faster, affordable PCs were already a reality and, as a consequence, both Amiga machines and Macs were struggling to keep up.

The Amiga 1200 still represented a good buy. It had a massive library of software, both for entertainment and serious work. But it was never really suited for demanding office applications. It did wonders in video and multimedia development, and of course games and entertainment – but the jump in price between the A1200 and the A4000 became harder and harder to justify. You could get a well-equipped Mac with professional tools in that price range.

Apple on the other hand was never really an entertainment company. Their primary market was professional graphics, desktop publishing and music production (Photoshop, Pro Tools, Logic etc. were exclusive Mac products). When it came to expansions and ports they were more interested in connecting customers to industrial printers, MIDI devices and high-volume storage. The Mac was always a machine for the upper class, people with money to burn; the Amiga dominated the middle class. It was a family-type computer.

But Apple was not a company in hiding, neither from Commodore nor from the Wintel threat. So in 1993 they introduced the Macintosh Quadra series to the consumer market. Unlike their other models this was aimed at home users and students, meaning that it was affordable, powerful and could be used for both homework and professional applications. It was a direct threat to the upper middle class that could afford the big-box Amiga machines.

Macintosh_Quadra_605

The 68k Macintosh Quadra came out in October of 1993

But no matter how brilliant these machines were, there was no hiding the fact that when it came to raw power – the PC was not taking any prisoners. It was graphically superior in every way, and Intel kept doubling CPU speeds year after year; just like Moore's law had predicted.

With the 486-DX2 looming on the horizon, it was game over for the old and faithful processors. The Motorola 68k family had been there since the late 70s; it was practically an institution, but it was facing enemies on all fronts and simply could not stand in the way of evolution.

The PowerPC architecture

If you are in your 20s you won't remember this, but back in the late 80s and early 90s, the battle between computer vendors was indeed fierce. You have to take into consideration that Microsoft and Intel did a real number on IBM. Microsoft stabbed IBM in the back and launched Windows as a direct competitor to IBM's OS/2. When I write "stabbed in the back" I mean that literally, because Microsoft was initially hired to create parts of OS/2. It was the typical lawsuit mess, not unlike Microsoft and Sun later, where people would pick sides and argue who the culprit really was.

As you can imagine, IBM was both bitter and angry at Microsoft for stealing the home PC market in such a shameful way. They were supposed to help IBM and be their ally, but turned out to be their fiercest competitor. IBM had also created a situation where the PC was licensed to everyone (hence the term "IBM clone") – meaning that any company could create parts for it and there was little IBM could do to control the market like they were used to. They would naturally get revenue from these companies in the form of royalties (and would later retire 99% of all their products; why work when you get billions for doing nothing?), but at the time they were still in the game.

Motorola was in a bad situation themselves, with the 68k line of processors clearly incapable of facing the much faster x86 CPU’s. Something new had to be created to ensure their market share.

The result of this “marriage of necessity” was the PowerPC line of processors.

VMM-iMacs-LG

The Apple “Candy” Mac’s made PPC and computing sexy

Apple jumped on the idea. It was the only real alternative to x86. And you have to remember that – had Apple gone to x86 at that point, they would basically have fed the forces that wanted them dead. You could hardly make out where Microsoft started and Intel ended during the early 90s.

I'm going to spare you the whole fall and rebirth of Apple. Needless to say, Apple came to the point where their branch of PowerPC processors caused more problems than benefits. The type of PowerPC processors Apple used generated an absurd amount of heat, and it was turning into a real problem. We see this in their later models, like the dual-CPU G5 PowerMac where 40% of the cabinet is dedicated purely to cooling.

And yes, Commodore kicked the bucket back in 1994 so they never finished their new models. Which is a damn shame because unlike Apple they went with a dedicated RISC processor. These models did not suffer the heating problems the PPC’s used by Apple had to deal with.

Note: PPC and RISC are two sides of the same coin. PPC processors are RISC-based, but naturally there exist hundreds of different implementations. To avoid a ton of arguments around this topic I treat PPC as something different from PA-RISC, which Commodore was playing with in their Hombre "skunkworks" projects.

You can read all about Apple's strain of PowerPC processors here, and PA-RISC here.

PPC in modern computers?

I am going to be perfectly honest. When I heard that the new Amiga machines were based on PowerPC my reaction was less than polite. I mean who the hell would use PowerPC in our day and age? Surely Apple’s spectacular failure would stand as a warning for all time? I was flabbergasted to say the least.

the_red_one_498240f858553

The Amiga One came out and I didn't even give it the time of day. The Sam440 motherboards came out, I couldn't care less. It would have been nice to own one, but the price at the time and the lack of software were just too disproportionate to make sense.

And now there is the Amiga x5000, and a smaller, more affordable A1222 (a.k.a. "Tabor") model just around the corner. And they are both equipped with a PPC CPU. There are just two logical conclusions you can make when faced with this: either the maker of these products is nuttier than a Snickers bar, or there is something the general public doesn't know.

What the general public doesn't know has turned out to be quite a lot. While you would think PPC was dead and buried, the reality of PPC is not that simple. It turns out there is not just one PPC family (or branch) but several. The one that Apple used back in the day (and that MorphOS for some odd reason supports) represents just one branch of the PPC tree, if you like. I had no idea this was the case.

The first thing you are going to notice is that the CPU in the new Amiga’s doesn’t have the absurd cooling problems the old Mac’s suffered. There are no 20cm cooling ribs and you don’t need 2 fans on Ritalin to prevent a cpu meltdown; and you also don’t need a custom aluminium case to keep it cool (everyone thinks the “Mac Pro” cases were just to make them look cool. Turned out it was more literal, it was to turn the inside into a fridge).

In other words, the branch of PPC that we have known so far, the one marketed as “PowerPC” by Apple, Phase5 and everyone back in the 90’s is indeed dead and buried. But that was just one branch, one implementation of what is known as PPC.

Remember when ARM died?

When I started to dig into the whole PPC topic I could not help but think about the Arm processor. It's almost spooky to reflect on how much we, the consumers, blindly accept as fact. Just think about it: you were told that PowerPC was the bomb, so you ended up buying that. Then you were told that PowerPC was crap and that x86 was the bomb, so you mentally buried PowerPC and bought x86 instead. The consumer market is the proverbial sheep farm where most of us just blindly accept whatever advertising tells us.

This was also the case with Arm. Remember a company called Acorn? It was a great British company that invented, among other things, the Arm core. I remember reading articles about Acorn when I was a kid. I even sold my Amiga for a while and messed around with an Acorn Archimedes. A momentary lapse of sanity, I know; I quickly got rid of it and bought back my Amiga. But I did learn a lot from messing around in RISC OS.

lg__MG_5441

The Acorn Archimedes, a brilliant RISC-based machine that sadly didn't make it

My point is, everyone was told that Arm was dead back in the 80’s. The Acorn computers used a pure RISC processor at the time (again, PPC is a RISC based CPU but I treat them as separate since the designs are miles apart), but it was no secret that they were hoping to equip their future Acorn machines with this new and magic Arm thing. And reading about the power and speed of Arm was very exciting indeed. Sadly such a computer never saw the light of day back in the 80’s. Acorn went bust and the market rolled over Acorn much like it would Commodore later.

The point I'm trying to make is that everyone was told that Arm died with Acorn. And once that idea was planted in the general public, it became a self-fulfilling prophecy. Arm was dead. End of story. It doesn't matter that Acorn had set up a separate company that was unaffected by the bankruptcy. Once the public deems something dead, it just vanishes from the face of the earth.

Fast forward to our time and Arm is no longer dead, quite the opposite! It's presently eating its way into just about every piece of electronics you can think of. In many ways Arm is what made the IoT revolution possible. The whole Raspberry PI phenomenon would quite frankly never have happened without Arm. The low price, coupled with the fantastic performance – not to mention that these CPUs rarely need cooling (unless you overclock the hell out of them) – has made Arm the most successful CPU ever made.

The PPC market share

With Arm's so-called death and rebirth in mind, let's turn our eyes to PPC and look at where it is today. PPC has suffered pretty much the same fate as Arm once did. Once a branch of the tech is declared "dead" by media and spin-doctors – regardless of the fact that PPC is actually a cluster of designs, not a single design or "chip" – the general public blindly follows and mentally buries the whole subject.

And yes I admit it, I am guilty of this myself. In my mind there was no distinction between PPC and PowerPC. Which is a bit like not knowing the difference between Rock & Roll as a genre, and KISS the rock band. If we look at this through a parallel what we have done is basically to ban all rock bands, regardless of where they are from, because one band once gave a lousy concert.

And that is where we are at. PPC has become invisible in the consumer market, even though it's there. Which is understandable considering the commercial mechanisms at work here, but is PPC really dead? This should be a simple question. And commercial mechanisms notwithstanding, the answer is a solid NO. PPC is not dead at all. We have just parked it in a mental limbo. Out of sight, out of mind and all that.

Tray-of-Broadway-Processors

PlayStation 3, Nintendo Wii U and PlayStation VR all use Freescale PPC

PPC today has a strong foothold in industrial computing. The oil sector is one market that uses PPC SBCs (single board computers) extensively. You will find them in valve controllers, pump and drill systems, and pretty much any system that requires a high degree of reliability.

You will also be surprised to learn that cheap PPC SBCs enjoy the same low energy requirements that people adopt Arm for (3.3 – 5.0 V). And naturally, the more powerful the chip – the more juice it needs.

The reason that PPC is popular and still being used with great success is first of all reliability. This reliability is not just about the physical hardware but also the software. PPC gives you two RTOSes (real-time operating systems) to choose from. Each of them comes with a software development toolchain that rivals whatever Visual Studio has to offer. So you get a good-looking IDE, a modern and up-to-date compiler, the ability to debug "live" on the boards – and also real-time signal processing protocols. The list of software modules you can pick from is massive.

neutrino_desktop

QNX RTOS desktop, This is a module you can embed in your own products

The last part of that paragraph, namely real-time signal processing, is extremely important. Can you imagine an oil valve under 40.000 cubic tons of pressure failing, but the regulator that is supposed to compensate doesn't get the signal because Linux or Windows was busy with something else? It gets pretty nutty at that level.

The second market can be found in set-top boxes, game consoles and TV signal decoders. While this market is no doubt under attack from cheap Arm devices – PPC still has a solid grip here due to its reliability. PPC as an embedded platform has roughly two decades' head start over Arm when it comes to software development. That is a lifetime in computing terms.

When developers look at technology for a product they are prototyping, the hardware is just one part of the equation. Being able to easily write software for the platform, perform live debugging of code on the boards, and maintain products over decades rather than consumer-grade 1-3 year warranties; it's just a completely different ballgame. Technology like external satellite-dish parts runs for decades without maintenance. And there are good reasons why you don't see x86 or Arm here.

ps3-productthumbnailr-v2-us-14aug14

PlayStation 3 and the new PS VR box both have a Freescale PPC CPU

As mentioned earlier, the PPC branch used today is not the same branch that people remember. I cannot stress this enough, because mixing these is like mistaking Intel for AMD. They may have many of the same features but ultimately they are completely different architectures.

The "PowerPC" label we know from back in the day was used to promote the branch that Apple used. Amiga accelerators also used that line of processors for their PowerUP boards. And anyone who ever stuffed a PowerUP board in their A1200 probably remembers the cooling issues. I bought one of the more affordable PowerUP boards for my A1200, and to this day I remember the whole episode as a fiasco. It was haunted by instability, sudden crashes and IO problems – all of it connected to overheating.

But the PPC processors delivered today by Freescale Semiconductor (bought by NXP back in 2015) are different. They don't suffer the heat problems of their remote and extinct cousins, have low power requirements and are incredibly reliable. Not to mention they are leagues more powerful than anything Apple, Phase 5 or Commodore ever got their hands on.

Is Freescale for the Amiga a total blunder?

Had you asked me a few days back, chances are I would have said yes. I have known for a while that Freescale was used in the oil sector, but I did not take into consideration the strength of the development tools and the important role an RTOS holds in a critical production environment.

I must also admit that I had no idea that my PlayStation and Nintendo consoles were PPC-based. The PlayStation 4 doesn't use PPC on its motherboard, but if you buy the fantastic and sexy VR add-on package, you get a second module that is, again, a PPC-based product.

It also turns out that IBM's high-end mainframes, the ones Amazon and Microsoft use to build the bedrock for cloud computing, are likewise PPC-based. So once again we see that PPC is indeed there and plays an important role in our lives – but most people don't see it. So all of this is a matter of perspective.

Basic_Pack

The Nintendo Wii U uses a Freescale PPC CPU, not exactly a below-par gaming system

But the Amiga x5000 or A1222 will not be controlling a high-pressure valve or serving half a million users (hopefully); so does this affect the consumer at all? Does any of this hold any value to you or me? What on earth would real-time feedback mean for a hobby user who just wants to play some games, watch a movie or code demos?

The answer is: it could have a profound benefit, but it needs to be developed and evolved first.

Musicians could benefit greatly from the superior signal processing features, but as of writing I have yet to find any mention of this in the Amiga NG SDK. So while the potential is there I doubt we will see it before the Amiga has sold in enough volume.

Fast and reliable signal dispatching in the architecture will also have a profound effect on IPC (inter-process communication), allowing separate processes to talk with each other faster and more reliably than on, say, Windows or Linux. Programmers typically use a mutex or a critical section to protect memory while it's being delivered to another process (note: painting in broad strokes here), and this is a very costly mechanism under Windows and Linux. For instance, the reason UAE is still single-threaded is that isolating the custom chips in separate threads and having them talk turned out to be too slow. If PPC is able to deal with that faster, it also means that processes can communicate faster and more interesting software can be made. Even practical things like a web server would greatly benefit from real-time message dispatching.

ppc

There is no lack of vendors for PPC SBC’s online, here from Abaco Systems

So for us consumers, it all boils down to volume. The Freescale branch of PPC processors is not dead and will be around for years to come; they are sold by the millions every year to a great variety of businesses; and while most of them operate outside traditional consumer awareness, that does have a positive effect on pricing. The more a processor sells, the cheaper it becomes.

Most people feel that the Amiga x5000 is too expensive for a home computer, and they blame that on the CPU. Forgetting that 50% of the subtotal goes into making the motherboard and all the parts around the CPU. The CPU alone does not represent the price of a whole new platform. And that's just the hardware! On top of this you have the job of re-writing a whole operating system from scratch, adding features that have evolved between 1994 and 2017, and making it all sing together through custom-written drivers.

So it’s not your average programming project to say the least.

But is it really too expensive? Perhaps. I bought an iMac 2 years back that was supposed to be my work machine. I work as a developer and use VMware for almost all my coding. Turned out the i5-based beauty just didn't have the RAM. And fitting it with more RAM (it came with 16 gigabytes, I need at least 32) would cost a lot more than a low-end PC. The sad part is that had I gone for a PC I could have treated myself to an i7 with 32 gigabytes of RAM for the same price.

I later bit the bullet and bought a 3500€ Intel i7 monster with 64 gigabytes of RAM and the latest Nvidia graphics card. Let's just say that the Amiga x5000 is reasonable in that context. I basically have an iMac I have no use for; it just sits there collecting dust, reduced to a music player.

Secondly we have to look at potential. The Mac and Windows machines now have their potential completely exposed. We know what these machines do and it’s not going to change any time soon.

The Amiga has a lot of hidden potential that has yet to be realized. The signal processing is just one example. The most interesting is by far the Xena chip (XMOS), which allows developers to implement custom hardware in software. It might sound like FPGA, but XMOS is a different technology. Here you write code using a custom C compiler that generates a special brand of opcodes. Your code is loaded onto a part of the chip (the chip is divided into a number of squares, each representing a piece of logic, or a "custom chip" if you like) and will then act as a custom chip.

790_2

The Amiga x5000 in all her glory, notice the moderate cooling for the CPU

The XENA technology could really do wonders for the Amiga. Instead of relying on traditional library files that are executed by the main CPU, things like video decoding, graphical effects, auxiliary 3D functionality and even emulation (!) can now be handled by XENA and executed in parallel with the main CPU.

If anything is going to make or break the Amiga, it's not going to be the Freescale PPC processor – it's going to be the XENA chip and how they use it to benefit the consumer.

Just imagine running UAE almost solely on the XENA chip, emulating 68k applications at near-native speed – without using the main CPU at all. Sounds pretty good! And this is a feature you won't find on a PC motherboard. Like always, they will add it should it become popular, but right now it's not even on the radar.

So I for one do believe that the next generation Amiga machines have a shot. The A1222 is probably going to be the defining factor. It will retail at an affordable price (around 450€) and will no doubt go head-to-head with both consoles and mid-range PC’s.

So like always it’s about volume, timing and infrastructure. Everything but the actual processor to be honest.

Last words

It's been a valuable experience to look around and read up on PPC. When I started my little investigation I had a dark picture in my head where the new Amiga machines were just a waste of time. I am happy to say that this is not true and the Freescale processors are indeed alive and kicking.

It was also interesting to see how widespread PPC technology really is. It's not just a specialist platform, although that is absolutely where its strength is financially; it ships in everything from your home router to your TV signal decoder or game system. So it does have a foot in the consumer market, but like I have outlined here – most consumers have parked it in a blind spot, and we associate the word "PowerPC" with Apple's fiasco in the past. Which is a bit sad because it's neither true nor fair.

djnick-amigaos4-screengrab

Amiga OS 4.x is turning out to be a very capable system

I have no problem seeing a future where the Amiga becomes a viable commercial product again. I think there is some way to go before that happens, and the spear-head is going to be the A1222 or a similar product.

But like I have underlined again and again – it all boils down to developers. A platform is only as good as the software you can run on it, and Hyperion should really throw themselves into porting games and creativity software. They need to build up critical mass and ship the A1222 with a ton of titles.

For my personal needs I will be more than happy just owning the x5000. It doesn’t need to be a massive commercial success, because Amiga is in my blood and I will always enjoy using it. And yes it is a bit expensive and I’m not in the habit of buying machines like this left and right. But I can safely say that this is a machine that I will be enjoying for many, many years into the future – regardless of what others may feel about it.

I would suggest that Hyperion lower their prices to somewhere around 1000€ if possible. Right now they need to think volume rather than profit, and hopefully Hyperion will start making the OS compatible with ARM. Again my thoughts go to volume, and to the fact that IoT and embedded systems need an alternative to Linux and Windows 10 Embedded.

But right now I’m itching to start developing for it – and I’m not alone 🙂

Amiga OS 4, object pascal and everything

August 2, 2017 1 comment

Those who read my blog know that I’m a huge fan of the Commodore Amiga machines. This was a line of computers that took the world by storm around 1985 and held its ground until 1993. Sadly the company had to file for bankruptcy after a series of absurd financial escapades by its management.


The original team before it fell prey to mismanagement

The death of Commodore is one of the great tragedies in computing history. There is no doubt that Commodore represented a much-needed alternative to Microsoft and Apple – and the death of Commodore meant innovation of technology took a turn for the worse.

Large books have been written on this subject, as well as great documentaries and movies – so I’m not going to dig further into the drama here. Ars Technica has a range of articles covering the whole story, so if you want to understand how the market got the way it is today, head over and read up on the story.

On a personal level I find the classic Amiga machines a source of great inspiration even now. Despite Commodore dying in the 90’s, today – decades after the fact – I still stumble over amazing source-code for this awesome computer. There are a few things in Amiga OS that “hint” at its true age, but ultimately the system has aged with amazing elegance and grace. It just blows people away when they realize that the Amiga desktop hit the market in 1985 – and that much of what we regard as a modern desktop experience is actually inherited from the Amiga.


Amiga OS is highly customizable. Here showing OS 3.9 [the last of the classic OS versions]

As I type this the Amiga is going through a form of revival. It’s actually remarkable to be a part of this because the scope of such an endeavour is monumental. But even more impressive is just how many people are involved. It’s not like some tiny “computer cult” where a bunch of misfits hang out in sad corners of the internet. Nope, we are talking about thousands of educated and technical people who still use their Amiga computers on a daily basis.

For instance: the realization of the new Amiga models has cost £1.2 million, so there are serious players involved in this.

The user-base is varied of course; it’s not all developers and engineers. You have gamers who love to kick back with some high quality retro-gaming. You have graphics designers who pixel large masterpieces (an almost lost art in this day and age). And you have musicians who write awesome tracks on the Amiga, then use them to spice up otherwise flat and dull PC-based productions.

What is even more awesome is the coding. Even the latest Freepascal has been ported, so if you were expecting people hand-punching hex codes you will be disappointed. While the Amiga is old in technical terms, it was so far ahead of the competition that people are surprised just how capable the classic systems are.

And yes, people code games, demos and utility programs for the classical Amiga systems even today. I just installed a Dropbox cloud driver on my system and it works brilliantly.

The brand new Amiga

Classic Amiga machines are awesome, but this post is not about the old models; it’s about the new models that are coming out now. Yes, you read right: next generation Amiga computers that have finally become a reality. Having waited for 22 years I am thrilled to say that I just ordered a brand new Amiga x5000 (and can’t wait to install Freepascal and start coding).

It’s also quite affordable. The x5000 model (which is the power system) retails at around €1650, which is roughly half the price I paid for my Intel i7, Nvidia GeForce GTX 970 workstation. And the potential as a developer is enormous.

Just think about the onslaught of Delphi code I can port over, and how instrumental my software can become by getting in early. Say what you will about Freepascal but it tends to be the second compiler to hit a platform after GCC. And with Freepascal in place a Delphi developer can do some serious magic!

Right. So the first Amiga is the power model, the Amiga x5000. This can be ordered today. It costs the same as a good PC (in the 1600€ range depending on import tax and VAT). This is far less than I paid for my crap iMac (that I never use anymore).

The power model is best suited for people who do professional work on the machine. Software development doesn’t necessarily need all the firepower the x5000 brings, but more demanding tasks like 3d rendering or media composition will.

The next model is the A1222, which is due out around Christmas 2017 or the first quarter of 2018.


The A1222 “Tabor”

You would perhaps expect a mid-range model, something retailing at around €800 or thereabouts – but the A1222 is without a doubt a low-end model.

It should retail for roughly €450. I think this is a great idea, because AEON (who makes the hardware) have different needs from Hyperion (who makes the new Amiga OS [more about that further into the article]). AEON needs to get enough units out to secure the foundation – while Hyperion needs vertical market penetration (read: become popular and also hit other hardware platforms as well). These factors are mutually exclusive, just like they are for Windows and OS X. Which is probably why Apple refuses to sell OS X without a Mac, or they could end up competing with themselves.

A brave new Amiga OS

But there is more to this “revival” than just hardware. Many would even say that hardware is the least interesting part of the next generation systems, and that the true value at this point in time is the new and sexy operating system. Because what the world needs now more than hardware (in my opinion) is a lightweight alternative to Linux and Windows. A lean, powerful, easy to use, highly customizable operating system that will happily boot on a $35 Raspberry Pi 3B, or a $2500 Intel i7 monster. Something that makes computing fun, affordable and most of all: portable!


My setup of Amiga OS 4, with FPC and Storm C/C++

And by lean I have to stress that the original Amiga operating system – the classic 3.x system that was developed all the way to the end – was initially created to thrive in as little as 512 KB. At most I had 2 megabytes of RAM in my Amiga 1200, and that was ample space to write and run large programs, play the latest games and enjoy the rich, colorful and user-friendly desktop environment. We have to remember that the Amiga had a multi-tasking, window-based OS a decade before Microsoft.

Naturally the next-generation systems are built to deal with the realities of 2017 and beyond, but incredibly enough the OS will run just fine with as little as 256 megabytes. Not even Windows Embedded can boot up on that. Linux comes close with distributions like Puppy and DSL, but Amiga OS 4 gives you a lot more functionality out of the box.

What way to go?

OK, so we have new hardware, but what about the software? Are the new Amigas supposed to run some ancient version of Amiga OS? Of course not! The people behind the new hardware have teamed up with a second company, Hyperion, that has – believe it or not – done a full re-implementation of Amiga OS! And naturally they have taken the opportunity to get rid of annoying behavior – and add behavior people expect in 2017 (like double-clicking on a window header to maximize it, easy access to menus and much more). Visually Amiga OS 4 is absolutely gorgeous. Just stunning to look at.

Now there are many different theories and ideas about where a new Amiga should go. Sadly it’s not just as simple as “hey, let’s make a new Amiga”; the old system is thoroughly mired in patent and licensing issues. It is close to an investor’s worst nightmare since ownership is so fragmented. Back when Commodore died, different parts of the Amiga were sold to different companies and individuals. The main reason we haven’t seen a new Amiga until now is because the owners have been fighting between themselves. The Amiga as we know it has been caught in limbo for close to two decades.

My stance on the whole subject is that Trevor Dickenson, the man behind the next generation Amiga systems, has done the only reasonable thing a sane human being can when faced with a proverbial patent kebab: the old hardware is magical for those of us who grew up on it – but by today’s standards it is an obsolete dinosaur. The same can be said about Amiga OS 3.9. So the path chosen has been brand new hardware coupled with a full re-implementation of the OS.

The other predominant idea is more GNU/Linux in spirit, where people want Amiga OS to be platform independent (or at least written in a way that makes the code run on different hardware as long as some fundamental infrastructure exists). This actually resulted in a whole new OS being written, namely Aros, which is a community made Amiga OS clone. A project that has been perpetually maintained for 20 years now.


Aros, a community re-implementation of Amiga OS for x86

While I think the guys behind Aros should be applauded, I do feel that AEON and Hyperion have produced something better. There are still kinks to work out on both systems – and don’t get me wrong: I am thrilled that Aros is available, I just enjoy OS 4 more than I do Aros. Which is my subjective opinion of course.

New markets

Right. With all this in mind, let us completely disregard the old Amiga and the Commodore drama, and instead focus on the new operating system as a product. It doesn’t take long before a few thrilling opportunities present themselves.

The first that comes to my mind is how well suited OS 4 would be as an embedded platform. The problem with Linux is ultimately the same one that haunts OS X and Windows, namely that size and complexity grow proportionally over time. I have seen Linux systems as small as 20 megabytes, but for running X-based full-screen applications that take advantage of hardware-accelerated graphics you really need a bigger infrastructure. And the moment you start adding those packages – Linux puts on weight and dependencies fast!


The embedded market is one place where Amiga OS would do wonders

With embedded systems I’m not just talking about headless servers or single-application devices. Take something simple like a ticket booth, an information kiosk or a POS terminal. Most of these run either Windows Embedded or some variation of Linux. Since both of these systems require a fair bit of infrastructure to function properly, the price of the hardware typically starts at around 300€. Delphi and C++ based solutions, at least those that I have seen, end up using boards in the 300€ to 400€ range.

This price-tag is high considering the tasks you need to do in a POS terminal or ticket system. You usually have a touch-enabled screen, a network connection, and a local database that will cache information should the network be down – the rest is visual code for dealing with menus, options, identification and fault tolerance. If a Visa terminal is included then a USB driver must also be factored in.

These tasks are not heavy in themselves. So in theory a smaller system, properly adapted for the job, could do the same work just as well if not better – at a much better price.

More for less, the Amiga legacy

Amiga OS would be able to deliver the exact same experience as Windows and Linux – but running on more cost-effective hardware. Where modern Windows and Linux typically need at least 2 gigabytes of RAM for a heavy-duty visual application, a full network stack and database services – Amiga OS is happy to run in as little as 512 megabytes. Everything is relative of course, but running a heavy visual application with less than a gigabyte of memory in 2017 is rare to say the least.

Already we have cut costs. Power ARM boards ship with 4 gigabytes of RAM and a snappy ARM v9 CPU – and medium boards ship with 1 or 2 gigabytes of RAM and a less powerful CPU. The price difference is already a good 75€ on RAM alone. And if the CPU is a step down, from ARM v9 to ARM v8, we can push it down by a good 120€. At least if you are ordering in bulk (say 100 units).

The exciting part is ultimately how well Amiga OS 4 scales. I have yet to try this since I don’t have access to the machine I have ordered yet – and sadly Amiga OS 4.1 is compiled purely for PPC. This might sound odd since everyone is moving to ARM, but there are still plenty of embedded systems based on PPC. But yes, I would urge our good friend Trevor Dickenson to establish a migration plan to ARM, because it would kill two birds with one stone: upgrading the faithful Amiga community while entering the embedded market at the same time. Since the same hardware is involved, these two factors would stimulate the growth and adoption of the OS.


The PPC platform gives you a lot of bang-for-the-buck in the A1222 model

But for the sake of argument let’s say that Amiga OS 4 scales exceptionally well, meaning that it will happily run on ARM v8 with 1 gigabyte of RAM. This would mean that it would run on systems like the Asus Tinkerboard that retails at 60€ incl. VAT. This would naturally not be a high-performance system like the x5000, but embedded is not about that – it’s about finding something that can run your application safely, efficiently and without problems.

So if the OS scales gracefully on ARM, we have brought the hardware cost down from 300€ to 60€ (I would round that up to 100€, something always comes up). If the customer’s software was Windows-based, a further 50€ can be subtracted from the software budget for bulk licensing. Again, buying in bulk is the key.

Think different means different

Already I can hear my friends that are into Linux yell that this is rubbish and that Linux can be scaled down from 8 gigabytes to 20 megabytes if so needed. And yes, that is true. But what my learned friends forget is that Linux is a PITA to work with if you haven’t spent a considerable amount of time learning it. It’s not a system you can just jump into and expect to have results the next day. Amiga OS has a much more friendly architecture, and things that are often hard to do on Windows and Linux are usually very simple to achieve on the Amiga.

Another fact my friends tend to forget is that the great majority of commercial embedded projects are done using commercial software. Microsoft actually presented a paper on this when they released their IoT support package for the Raspberry Pi. And based on personal experience I have to agree. In the past 20 years I have only seen two companies that use Linux as their primary OS both in products and in their offices. Everyone else uses Windows Embedded for their products and day-to-day management.

So what you get are developers using traditional Windows development tools like Visual Studio or Delphi (although that is changing rapidly with node.js). And they might be outstanding programmers, but Linux is still reserved for server administrators and the odd few that use it on a hobby basis. We simply don’t have time to dig into esoteric “man pages” or explore the intricate secrets of the kernel.

The end result is that companies go with what they know. They get Windows Embedded and use an expensive x86 board. So where they could have paid 100€ for a smaller SBC and used Amiga OS to deliver the exact same product – they are stuck with a 350€ baseline.

Be the change

The point of this little post has been to demonstrate that yes, the embedded market is more than open for alternatives. Linux is excellent for those that have the time to learn its many odd peculiarities, but over the past 20 years it has grown into a resource-hungry beast. Which is ironic, because it used to be Windows that was the bloated scapegoat. And to be honest Windows Embedded is a joy to work with and much easier to shape to your exact needs – but the prices are ridiculous, and it won’t perform well unless you throw at least 2 gigabytes at it (relative to the task of course, but in broad strokes that’s the ticket).

But wouldn’t it be nice with a clean, resource-friendly and extremely fast alternative? One where auto-starting applications in exclusive mode is a one-liner in the startup-sequence file? A file which is actually called “startup-sequence” rather than some esoteric “init.d” alias that is neither a folder nor an archive, but something reminiscent of the Windows registry? A system where libraries and the whole folder structure that makes up drivers, shell, desktop and services is intuitively named for what it is?


Amiga OS could piggyback on the wave of low-cost ARM SBC’s that are flooding the market

You could learn how to use Amiga OS in a couple of days, tops – yet it holds great depth, so you can grow with the system as your needs become more complex. The architecture is so well-organized that even if you know nothing about settings, a folder named “prefs” doesn’t leave much room for misinterpretation.

But the best thing about AmigaOS is by far how elegantly it has been architected. You know, when software is planned right it tends to factor out things that would otherwise become obstacles. It’s like a well-oiled machine where each part makes perfect sense and you don’t need a huge book to understand it.

From where I am standing, Amiga OS is ultimately the biggest asset Hyperion and AEON have to offer. I love the new hardware that is coming out – but there is no doubt in my mind, and I know I am right about this, that the market these companies should focus on now is not PPC – but rather ARM and embedded systems.

It would take an effort to port over the code from a PPC architecture to ARM, but having said that – PPC and ARM have much more in common than say, PPC and x86.

I also think the time is ripe for a solid power ARM board for desktop computers. While smaller boards get most of the attention, like the Raspberry Pi, the ODroid XU4 and the (S)Tinkerboard – once you move the baseline beyond 300€ you see some serious muscle. Boards like the iMX6 OpenRex SBC Ultra pack a serious punch, and as expected they ship with 4 gigabytes of RAM out of the box.

While it’s impossible to do a raw comparison between the A1222 and the iMX6 OpenRex, I would be surprised if the iMX6 delivered poor performance compared to the A1222 chipset. I am also sure that if we pushed the price up to 700€, aimed at home computing rather than embedded – the ARM power boards involved would wipe the floor with PPC. There are a ton of factors at play here – a fast CPU doesn’t necessarily mean better graphics. A good GPU should make up at least 1/5 of the price.

Another cool factor regarding ARM is that the BIOS gives you a great deal of features you can incorporate into your product. All the ARM boards I have give you FAT32 support out of the box, for instance; this is supported by the SoC itself and you don’t need to write filesystem drivers for it. Most boards also support the Ext2 and Ext3 filesystems, recognized automatically on boot. The rich BIOS/mini-kernel is what makes ARM so attractive to code for, because it takes away a lot of the boring, low-level tasks that took months to get right in the past.

Final words

This has been a long article, from the early years of Commodore – all the way up to the present day and beyond. I hope some of my ideas make sense – and I also hope that those who are involved in the making of the new Amiga perhaps pick up an idea or two from this material.

Either way I will support the Amiga with everything I’ve got – but we need a couple of smart ideas and concrete plans on behalf of management. And in my view, Trevor is doing exactly what is needed.

While we can debate the choice of PPC, it’s ultimately a story with a long, long background to it. But thankfully nothing is carved in stone and the future of the Amiga x5000 and A1222 looks bright! I am literally counting the days until I get one!

Amibian.js on bitbucket

August 1, 2017 Leave a comment

The Smart Pascal driven desktop known as Amibian.js is available on bitbucket. It was hosted in a normal github repository earlier – so make sure you clone out from this one.

About Amibian.js

Amibian is a desktop environment written in Smart Pascal. It compiles to JavaScript and can be used through any modern HTML5-compliant browser. The project consists of both a client and a server, both written in Smart Pascal. The server is executed by node.js (note: please install PM2 to have better control over scaling and task management: http://pm2.keymetrics.io/).

Amibian.js is best suited for embedded projects, such as kiosk systems. It has been used in tutoring software for schools, custom routers and a wide range of different targets. It can easily be molded into a rich environment for SAD (single application device) based software – but also made to act more as a real operating system:

  • Class driven filesystem, easy to target external services
    • Ram device-type
    • Browser cache device-type
    • ZIPfile device-type
    • Node.js device-type
  • Cross domain application hosting
    • Traditional IPC protocol between hosted application and desktop
    • Shared resources
      • css styling
      • glyphs and images
    • Event driven visual controls
  • Windowing manager makes it easy to implement custom applications
  • Support for fullscreen API

Amibian ships with UAE.js (based on the SAE.js codebase) making it possible to run Amiga software directly on the desktop surface.

The bitbucket repository is located here: https://bitbucket.org/hexmonks/client

 

Drag and drop with smart pascal

July 28, 2017 Leave a comment

Drag and drop under HTML5 is incredibly simple; even simpler than Delphi’s mechanisms. Having said that, it can be a PITA to work with due to the async nature of the JavaScript API.

This functionality is just begging to be isolated in a non-visual controller (read: component), and it’s on my list of RTL features. But it will have to wait until we have wiped the list clean.


Drag and drop is useful for many web applications

Anyways, people asked me about a simple way to capture a drag & drop event and kidnap the file-data without any form tags involved. So here is a very simple ad-hoc example.

The FView variable is a reference to a visible control. In this case the form itself, so that you can drop files anywhere.

FView.handle.ondragover := procedure (event: variant)
begin
  // In order to hijack drag & drop, this event must prevent
  // the default behavior. So we hotwire it
  event.preventDefault();
end;

FView.Handle.ondrop := procedure (event: variant)
begin
  event.preventDefault();

  // The dataTransfer object holds the items that were dropped
  var ev := event.dataTransfer;
  if (ev) then
  begin
    if (ev.items) then
    begin
      for var x:=0 to ev.items.length-1 do
      begin
        var LItem := ev.items[x];
        if (LItem) then
        begin
          if string(LItem.kind).ToLower() = "file" then
          begin
            var file := LItem.getAsFile();

            var reader: variant;
            asm
              @reader = new FileReader();
            end;
            reader.onload := procedure (data: variant)
            begin
              // readyState = 2 (DONE) means the read has completed
              if reader.readyState = 2 then
              begin
                writeln("File data ready:");

                var binbuffer := reader.result;
                var raw: TDefaultBufferType;
                asm
                  @raw = new Uint8Array(@binbuffer);
                end;

                var Buffer := TBinaryData.Create();
                Buffer.AppendBuffer(raw);

                writeln(buffer.ToBase64());

              end;
            end;
            reader.readAsArrayBuffer(file);

          end;
        end;
      end;
    end;
  end;
end;

Object models, a Smart Pascal example

July 22, 2017 Leave a comment

Most information developers work with is hierarchical and organized in a classical parent-child manner. Parent-child relationships are so universal that they are everywhere, from how visual controls are organized to how elements in a document are stored. No matter if it’s a visual treeview in Delphi, an HTML element in a document or entries in a PDF file; it’s pretty much all organized as a series of inter-linked, parent-child relationships.

Since parent-child trees are so universal, developers usually end up with a unit that contains a couple of simple base-classes. Typically these classes are used to create a model of things. The tree can contain the actual data itself – or, more commonly, it’s used to represent a model that is later processed or realized visually.


Most frameworks are essentially parent-child hierarchies

Here is an example of such a unit. It compiles under Smart Pascal and should work fine for most versions. It has virtually no dependencies.

Populate the data property with whatever data you want to represent, then you can enumerate and work with the model (a small usage sketch follows after the unit). What you use it for is eventually up to you. I use it a lot when dealing with menus; it makes it easier to define menus and sub-menus and then simply feed the model to a construction routine.

unit DataNodes;

interface

uses
  System.Types;

type

  // for older SMS versions that lack "system.types", un-remark this:
  // TEnumResult = (erContinue = $A0, erBreak = $10);

  TCustomDataNode = class;
  TCustomDataNodeList = array of TCustomDataNode;

  TDataNodeEnumEnterProc = function (const Root: TCustomDataNode): TEnumResult;
  TDataNodeEnumExitProc = function (const Root: TCustomDataNode): TEnumResult;
  TDataNodeEnumProc = function  (const Child: TCustomDataNode): TEnumResult;
  TDataNodeCompareProc = function (const Value: variant): boolean;

  TCustomDataNode = class
  protected
    property  Parent: TCustomDataNode;
    property  Caption: string;
  public
    property  Data: variant;
    property  Children: TCustomDataNodeList;

    procedure Clear;

    function  Search(const Compare: TDataNodeCompareProc): TCustomDataNode;

    procedure ForEach(const Before: TDataNodeEnumEnterProc;
              const Process: TDataNodeEnumProc;
              const After: TDataNodeEnumExitProc); overload;

    procedure ForEach(const Process: TDataNodeEnumProc); overload;

    function  Serialize: string;
    class function  Parse(const JSonData: string): TCustomDataNode;

    constructor Create(const NodeOwner: TCustomDataNode;
                NodeText: string; const NodeData: variant); overload;

    constructor Create(const NodeOwner: TCustomDataNode;
                const NodeData: variant); overload;
  end;

  TDataNode = class(TCustomDataNode)
  public
    property  Parent;
    property  Caption;
  end;

  TDataNodeTree = class(TCustomDataNode)
  public
    property  Caption;
  end;

implementation

//#############################################################################
// TCustomDataNode
//#############################################################################

constructor TCustomDataNode.Create(const NodeOwner: TCustomDataNode;
            NodeText: string; const NodeData: variant);
begin
  Parent := NodeOwner;
  Caption := NodeText;
  Data := NodeData;
end;

constructor TCustomDataNode.Create(const NodeOwner: TCustomDataNode;
         const NodeData: variant);
begin
  Parent := NodeOwner;
  Data := NodeData;
end;

function TCustomDataNode.Serialize: string;
begin
  asm
    @result = JSON.stringify(@self);
  end;
end;

class function TCustomDataNode.Parse(const JSonData: string): TCustomDataNode;
begin
  asm
    @result = JSON.parse(@JSONData);
  end;
end;

procedure TCustomDataNode.Clear;
begin
  try
    for var x := 0 to Children.length-1 do
    begin
      if Children[x] <> nil then
        Children[x].free;
    end;
  finally
    Children.Clear();
  end;
end;

function TCustomDataNode.Search(const Compare: TDataNodeCompareProc): TCustomDataNode;
var
  LResult: TCustomDataNode;
begin
  ForEach( function (const Child: TCustomDataNode): TEnumResult
    begin
      // keep enumerating until the compare callback reports a match
      result := TEnumResult.erContinue;
      if Compare(Child.Data) then
      begin
        LResult := Child;
        result := TEnumResult.erBreak;
      end;
    end);
  result := LResult;
end;

procedure TCustomDataNode.ForEach(const Before: TDataNodeEnumEnterProc;
          const Process: TDataNodeEnumProc;
          const After: TDataNodeEnumExitProc);
begin
  if assigned(Before) then
  Before(self);
  try
    if assigned(Process) then
    begin
      for var LChild in Children do
      begin
        if Process(LChild) = erBreak then
        break;
      end;
    end;
  finally
    if assigned(After) then
    After(self);
  end;
end;

procedure TCustomDataNode.ForEach(const Process: TDataNodeEnumProc);
begin
  if assigned(Process) then
  begin
    for var LChild in Children do
    begin
      if Process(LChild) = erBreak then
      break;
    end;
  end;
end;

end.
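
To make the intended workflow a little more concrete, here is a minimal usage sketch for the unit above. It assumes that child nodes are registered manually through the Children array (the constructors shown only store the parent reference), and the captions, command strings and the BuildMenuModel routine are purely illustrative:

procedure BuildMenuModel;
var
  LRoot: TDataNodeTree;
  LFile: TDataNode;
begin
  // Root of the model
  LRoot := TDataNodeTree.Create(nil, 'MainMenu', 'root');

  // A "File" menu with two children; each node is appended manually
  LFile := TDataNode.Create(LRoot, 'File', 'menu:file');
  LRoot.Children.Add(LFile);
  LFile.Children.Add(TDataNode.Create(LFile, 'Open', 'cmd:open'));
  LFile.Children.Add(TDataNode.Create(LFile, 'Close', 'cmd:close'));

  // Enumerate the immediate children of the "File" menu
  LFile.ForEach(
    function (const Child: TCustomDataNode): TEnumResult
    begin
      result := TEnumResult.erContinue;
      writeln(Child.Data);
    end);

  // The whole model can be serialized to JSON in one call
  writeln(LRoot.Serialize());
end;

From here the model can be fed to whatever construction routine realizes it visually, be it a menu builder or a treeview.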

LDef parser done

July 21, 2017 Leave a comment

Note: For a quick introduction to LDef click here: Introduction to LDef.

Great news guys! I finally finished the parser and model builder for LDef!

That means we just need to get the assembler ported. This is presently running fine under Smart Pascal (I like to prototype things there since it’s faster) – and it will be easy to port it over to Delphi and Freepascal after the model has gone through the steps.

I’m really excited about this project and while I sadly don’t have much free time – this is a project I truly enjoy working on. Perhaps not as much as Smart Pascal, which is my baby, but still; it’s turning into a fantastic system.

Thoughts on the architecture

One of the things I added support for, and something I have hoped Embarcadero would add to Delphi for a number of years now, is support for contract coding. This is a huge topic that I’m not jumping into here, but one of the features it requires is support for entry and exit sections. Essentially, you can define code that executes before the method body and directly after it has finished (before the result is returned, if it’s a function).

This opens up some very clever means of preventing errors, or at the very least giving the user better information about what went wrong. Automated tests also benefit greatly from this.

For example, a normal object pascal method looks like this:

procedure TForm1.MySpecialMethod;
begin
  writeln("You called my-special-method")
end;

The basis of contract design builds on this classical form and expands it like so:

procedure TForm1.MySpecialMethod;
  Before()
  begin
    writeln("Before my-special-method");
  end;

  After()
  begin
    writeln("After my-special-method");
  end;

begin
  writeln("You called my-special-method")
end;

Note: contract design is a huge system and this is just a fragment of the full infrastructure.

What is cool about the before/after snippets is that they allow you to verify parameters before the body is even executed, and likewise you get to work on the result before the value is returned (if any).
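
To make that concrete, here is a rough sketch in plain object pascal of roughly what a contracted method could be expanded into behind the scenes. This is not LDef syntax, and the Divide function is entirely made up; it only illustrates the before/after semantics:

function Divide(const A, B: integer): integer;
begin
  // "Before" section: validate the parameters before the body runs
  if B = 0 then
    raise Exception.Create('Divide: B must not be zero');

  // The actual method body
  result := A div B;

  // "After" section: inspect the result before it is returned
  if result < 0 then
    writeln('Divide: negative result ' + IntToStr(result));
end;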

You might ask: why not just write such checks directly, like people do all the time? That is a fair point. But there will also be methods that you have no control over, like a wrapper method that calls a system library for instance. Being able to attach before/after code to externally defined procedures helps take the edge off error testing.

Secondly, if you are writing a remoting framework where variant data and multi-threaded invocation is involved – being able to check things as they are dispatched means catching potential errors faster – leading to better performance.

As always, coding techniques are a source of argument – so I’m not going into that now. I have added support for it and if people don’t need it then fine, just leave it be.

Under LDef assembly it looks like this:

public void main() {
  enter {
  }

  leave {
  }
}

Well I guess that’s all for now. Hopefully my next LDef post will be about the assembler being ready – leaving just the linker. I need to experiment a bit with the codegen and linker before the unit format is complete.

The bytecode-format needs to include enough information so that the linker can glue things together. So every class, member, field etc. must be emitted in a way that is easy and allows the linker to quickly look things up. It also needs to write the actual, resulting method offsets into the bytecode.

Have a happy weekend!

FMX 4 linux gets an update

July 20, 2017 Leave a comment

The Firemonkey framework that allows you to compile for the Linux desktop (Linux x86 server is already supported) just got a nice update. Among the changes are a nice radial gradient pattern – and several squashed bugs.


This is an awesome addition if you already have Delphi XE 10.2 and if writing Ubuntu desktop applications is something you want – then this is the package to get!

Check it out: http://fmxlinux.com/

Smart Pascal: Amibian vs. FriendOS

July 20, 2017 Leave a comment

This is not a new question, and despite my earlier post I still get hammered with these on a weekly basis – so let’s dig into this subject and clear it up.

I fully understand that for non-developers, suddenly having two Amiga-like web desktops can be a bit confusing; especially since, superficially at least, they do many of the same things. But there is actually a lot of coincidence surrounding all this, as well as evolution of the general topic. People who work with a topic will naturally come up with the same ideas from time to time.

But OK, let’s dig into this and clear away any confusion.

You know about FriendOS right? It looks a lot like Amibian


Custom native web servers have been a part of Delphi for ages, so it’s not that exciting for us

“A lot” is probably stretching it. But OK: FriendOS is a custom server system with a sexy desktop front-end written in HTML5. So you have a server that is custom written to interact with the browser in a special way. This might sound like a revolution to non-developers, but it’s actually established technology; it’s been a part of Delphi and C++ Builder for at least 12 years now (Intraweb being the best example, Raudus another). So if you are wondering why I’m not dazzled, it’s because this has been around for a while.

The whole point of Amibian.js is to demonstrate a different path; to get away from the native back-end and to make the whole system portable and platform independent. So in that regard the systems are diametrically different.


Custom web servers that talk to your web-app are old news. Delphi developers have done this for a decade at least and it’s not really interesting at this point. Node.js holds much greater promise.

What FriendOS has done that is unique, and that I think is super cool – is to couple their server with RDP (remote desktop protocol) and some nice video streaming for smooth video chat. Again, these are off-the-shelf parts that anyone can add once you have a native back-end; it’s not really hard to code, but it is time-consuming – especially when you are potentially dealing with large numbers of users spawning threads all over the place. I think Friend-Labs have done an exceptionally good job here.

When you combine these features it creates the impression of an operating system like environment. And this is perfectly fine for ordinary users. It all depends on your needs and what exactly you use the computer for.

And just to set the war-mongers straight: FriendOS is not going up against Amibian. It’s going up against ChromeOS, Nayu and a ton of similar systems; all of them with deep pockets and an established software portfolio. We focus on software development. We’re not even in the same ballpark.

To be perfectly frank: I see no real purpose for a web desktop except when connected to a context. There has to be an advantage beyond isolating web functions in one place. You need something special that your system does better, or differently, than others. Amibian has been about UAE.js and running retro games in a familiar environment – and thus creating a base that Amiga lovers can build on and play with. Again, it is based on our prefab for customers that make embedded systems and use our compiler and RTL for that.

If you have a hardware product like a NAS, a ticket system or a retro-game machine and want a nice web front-end for it: then it makes sense. But there is absolutely nothing in either of our systems that you can’t whip up using Intraweb or Raudus in a few weeks. If you have the luxury of a native back-end, then adding Active Directory support is a matter of dropping a component. You can even share printers and USB devices over the wire if you like; this has been available to Delphi and C++ developers for ages. The “new” factor here, which FriendOS does very well I might add, is connectivity.

This might sound like criticism but it’s really not. It’s honesty and facts. They are going to need some serious cash to take on Google, Samsung, LG and various other players that have been doing similar things for a long time (or are about to jump on the same concepts). Amibian.js is for Amiga fans and people who use Smart Pascal to write embedded applications. We don’t see anything to compete with, because Amibian is a prefab connected to a programming language. FriendOS is a unification system.

A programming language doesn’t have the aspirations of a communications company. So the whole “oh, who is best” or “are you the same” debate is just wrong.

Ok you say it’s not competing, but why not?

To understand Amibian.js you first need to understand Smart Pascal (see the Wikipedia article on Smart Pascal). Smart Pascal (smartmobilestudio.com) is a software development studio for writing software using web technology rather than native machine-code. It allows you to create whatever you like, from games to servers, or kiosk software to the next Facebook clone.

Our focus is on enabling our customers to quickly program robust mobile applications, servers, kiosk software, games or large JavaScript projects; products that would otherwise be hard to manage if all you have is vanilla JavaScript. I mean, why spend 2 years coding something when you can do it in 2 months using Smart? So a web desktop is an almost ridiculously small thing when you understand how large our codebase is and the scope of the product.


Under Smart Pascal, what people know as Amibian.js is just a project type. There is no competition between FriendOS and Amibian, because a web desktop represents a ridiculously small piece of our examples; it’s literally mistaking the car for the factory. Amibian is not our product, it is a small demo and prefab (pre-fabricated system that others can download and build on) project that people use to save time. So under Smart, creating your own web desktop is a piece of cake; it’s a click, and then you can brand it, expand it and do whatever you like with it. Just like you would any project you create in Visual Studio, Delphi or C++ Builder.

So we are not in competition with FriendOS because we create and deliver development tools. Our customers use Smart Pascal to create web environments both large and small, and naturally we deliver what they need. You could easily create a FriendOS clone in Smart if you have the skill, but again – that is but a tiny particle in our codebase.

Really? Amibian.js is just a project under Smart Pascal?

Indeed. Our product delivers a full object-oriented pascal compiler, debugger and IDE. So you can write classes, use inheritance and enjoy all the perks of a high-level language — and then compile this to JavaScript.

You can target node.js, the browser and about 90+ embedded devices out of the box. The whole point of Smart Pascal is to avoid the PITA that is writing large applications in JavaScript. And we do this by giving you a classical programming language that was made especially for application authoring, and then compile that to JavaScript instead.


Amibian.js is just a tiny, tiny part of what Smart Pascal is all about

This is a massive undertaking that started back in 2009/2010 and involves a high-quality compiler, linker, debugger and code generator; a full IDE with a ton of capabilities and last but not least: a huge run-time library that allows you to work with the DOM (document object model, or HTML) and node.js from the vantage point of a programmer.

Most people approach web development as designers. They write HTML and then style it using a stylesheet. They work with colors, aspects and pages. Which means people who traditionally write programs fall between two chairs: first they must learn about HTML and CSS, and secondly a language which is ill-equipped for large-scale applications (imagine writing Adobe Photoshop in nothing but JS. Sure it’s possible, but wouldn’t you rather spend a month coding it than a year? In a language that actually makes sense?).

With Smart you approach web development like you do writing programs. You work with visual controls, change properties and write code in response to events. Even writing your own visual controls that you can re-use and inherit from later is both fun and easy. So rather than ending up with a huge mess of spaghetti code, which sadly is the fate of most large-scale JavaScript projects – Smart lets you work like you are used to. In a language better suited for the task.
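
As a rough illustration of that workflow – and with the caveat that the exact class and method names here are recalled from the RTL and should be treated as approximations rather than gospel – creating a control and reacting to an event looks roughly like this:

procedure TForm1.InitializeForm;
var
  LButton: TW3Button;
begin
  inherited;

  // Create a visual control and position it, much like in Delphi
  LButton := TW3Button.Create(self);
  LButton.SetBounds(10, 10, 120, 32);
  LButton.Caption := 'Click me';

  // Respond to events with ordinary object pascal code
  LButton.OnClick := procedure (Sender: TObject)
  begin
    writeln('Button clicked');
  end;
end;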

And yes, I was not kidding when I said this was a huge undertaking. The source code in our codebase is close to 2.5 gigabytes. And keep in mind that this is source-code and libraries. So it’s not something you slap together over the weekend.


The Smart source-code is close to 2.5 gigabytes. It has taken years to complete

But why do Amibian and FriendOS both focus on the Amiga?

That is pure coincidence. The guys over at Friend Labs started out on the Amiga just like we did. So when I updated our desktop project (previously called Quartex Media Desktop), the Amiga look and feel came naturally to me.

I’m a huge retro-computing fan who loves the Amiga. When I sat down to rewrite our window manager I loved the way Amiga OS 4.x looked, so I decided to implement a UI inspired by that.

People have to remember that the Amiga was a huge success in Scandinavia, so finding developers that are in their late 30s or early 40s that didn’t own an Amiga is harder than you think.

So the fact that we all root our ideas back to the Amiga is both coincidence and a mutual passion for a great platform. One that really should have survived the financial onslaught of fat CEOs and their minions on the board.

But Amibian does a lot of what FriendOS does?

Probably. JavaScript is multi-tasking by default, so if loading external URLs into window containers, doing live resize and other things is what you refer to, then yes. But that is the nature of web programming. It’s like creating a bucket if you want to carry water; it is a natural first step of an evolutionary pattern. It’s not like FriendOS is copying us, I would imagine.


For the record, Smart started back in 2010 and the media desktop came in with the first hotfix, so it has been available for years before Friend-Labs even existed. Creating a desktop has not been a huge part of what we do, because mobile applications, building a rich and solid run-time library with hundreds of classes for our customers – and making an IDE that is great to use – that is our primary job.

We didn’t even know FriendOS existed. Let alone that it was a Norwegian product.

But you posted that you worked for FriendOS earlier?

Yes I did, very briefly. I was offered a position and I worked there for a month. It was a chance to work side by side with legends like David John Pleasance, ex head of Commodore for Europe; and also my childhood hero Francois Lionet, author of Amos Basic for the Amiga way back in the 80s and 90s.


We never forget our childhood heroes

Sadly we had our wires crossed. I am an awesome object pascal developer, while the guys at Friend-Labs are awesome C developers. I work primarily on Windows while they work mostly on Linux. So in essence they hired a Delphi developer to work in a language he doesn’t know, on a platform he hasn’t used.

They simply took it for granted that I worked in C/C++, while I took it for granted that they used object pascal. It’s an easy mistake to make and it’s not the first time; probably not the last either.

Needless to say the learning curve would be extremely high for any developer (learning a new operating-system and programming language at the same time as you are supposed to be productive).

When my girlfriend suddenly faced a life-threatening illness the situation became worse. It was impossible for me to commute or leave her side for the foreseeable future; so when you add the six-month learning curve to this situation – six months of not being able to contribute on the level I am used to – well, I am old enough to know how that ends. So I did what was best for everyone and resigned.

Besides, I am a damn good Delphi developer with standing invitations to many companies; so it made more sense to just take a step backwards. Which was not fun, because I really enjoyed the short time I was there. But it was not meant to be.

And that is basically all there is to it.

OK. But if Smart is a development tool, will it support FriendOS?

This is something that I really want to do. But since The Smart Company is a proper company with stocks, shareholders and investors – it’s not a decision I can take on my own. It is something that must be debated by the board. But personally yeah, I would love that.


As they grow, so does the need for proper development tools

One of the reasons I hope FriendOS succeeds is because it’s a win-win situation. The more they expand the more relevant Smart becomes. Say what you will about JavaScript but writing large and complex applications is not easy by any measure.

So the moment we introduce Smart Pascal for Friend, their users will be able to write large applications rapidly, with better time-to-market and consequent ROI. So it’s a win-win. If they succeed then we get a bigger market; if they don’t, we haven’t lost anything.

This may sound extremely self-serving, but Friend-Labs have had the same chance as everyone else to invest in Smart; our investor plans have been available for quite some time, and we have to do what is best for our company.

But what about Amibian, was it just a short thing?

Not at all. It is put on hold for a few months while we release the next generation RTL. Which is probably the biggest update in the history of Smart Pascal. We have a very clear agenda ahead of us and Amibian.js is (as underlined) a very small part of what we do.

But Amibian is written using our next generation RTL, and without that our customers can’t really do much with it. So it’s important to get the RTL out first and then work on the IDE to reflect its many new features. After that, Amibian.js development will continue.

The primary target for Amibian.js is embedded devices and kiosk systems, coupled with full-screen web applications and hardware front-ends (NAS and backup devices being great examples). So the desktop will run on affordable, off-the-shelf hardware starting at $40 and all the way up to the most powerful and expensive x86 boards on the market. Cheap solutions like the Raspberry Pi, ODroid XU4 and Tinkerboard will deliver what you today need a dedicated $120 x86 board to achieve.


Our desktop will run on many targets and is platform independent by design

This means that our desktop has a wildly different modus operandi. We will not require a constant connection to a remote server. Amibian will happily boot up on a single device, regardless of processor type.

Had we coded our back-end using Delphi or C++ Builder (natively, like FriendOS has done) we would have been finished months ago. And I could have caught up with FriendOS in a couple of months if I wanted to. But that is not on our agenda. We have written our server framework for node.js as we coded the desktop – which means it’s platform and OS agnostic by design. If node.js runs, Amibian will run. It won’t care if you are running on a $40 embedded board or the latest Intel i9 CPU.

Last words

I really hope this has helped and that the confusion between Amibian.js and our agenda, versus what Friend-Labs is doing, is now clearer.


From Norway with love

I wish Friend-Labs the very best and hope they are successful in their endeavour. They have worked very hard on the product and deserve that. And while I might come across as arrogant at times, I’m really not.

Web desktops have been around for a long time now (Asustor’s is my favorite), through Delphi and C++ Builder, and that is just a fact. But that doesn’t mean you can’t put things together in new and interesting ways! Smart itself was first put together from existing technology. It was said to be impossible by many because JavaScript and object pascal are unthinkable companions. But it turned out to be a perfect match.

As for the future – personally I don’t believe in the web desktop outside a specific context, something to give it purpose if you like. I believe, for instance, that Amibian.js will be awesome for Amiga users when it’s running on a $99 ARM laptop. Where the system boots straight into a full-screen desktop and where UAE.js is fully integrated into the core, making retro-gaming and running old programs close to seamless. That I can believe in.

But it would make no sense running Amibian or FriendOS in a browser on top of a Windows desktop or a full Ubuntu X session. Unless the virtual desktop functions as your corporate window, with access to company mail, documents and essentially what every web-based intranet already does. So once again we end up with the fact that this has already been done. And unless you create a unique context for it, it just won’t have any appeal. This is also why I haven’t pursued the same tech Friend-Labs have, because that’s not where the exciting stuff is happening.

But I will happily be proven wrong, because that means an even bigger market for us should we decide to support the platform.

LDef and bytecodes

July 14, 2017 Leave a comment

LDef, short for Language Definition format, is a standard I have been formulating for a couple of years. I have taken my experience with writing various compilers and parsers, and also my experience of writing RTLs, and combined it all into a standard.

LDef is a way for anyone to create their own programming language. Just like popular libraries and packages deal with the low-level stuff – like Gr32, which is an excellent graphics library – LDef deals with the hard stuff and leaves you with the pleasant job of defining what the language should look like.

The idea is to make a language construction kit, if you like, where the underlying engine is flexible enough to express the languages we know and love today – and also powerful enough to express new ideas. For example: let’s say you want to create an awesome new game system (just as an example; it applies to any system that can be automated). You have the means and skill to create the actual engine – but how are you going to market it? You will be up against monoliths like Unity and simple “click and play” engines like ClickTeam Fusion, Game Maker and the like.

Well, the only way to make good games is hard work. There is no two ways about it. You can fake your way only so far – so at the end of the day you want to give your users something solid.

In our example of publishing a game-engine, I think that you would stand a much better chance of attracting users if you hooked that engine up to a language. A language that is easy to use, easy to learn and with commands that are both specific and non-specific to your engine.

There are some flavours of Basic that have produced knock-out games for decades, like BlitzBasic. That language alone has produced hundreds of titles for PC, Xbox and even Nintendo. So it’s insanely fast and not a pushover.

And here is the cool part about LDef: namely that it makes it easy for you to design your own languages. You can use one of the pre-defined languages, like object pascal or visual basic, if that is what you like – but ultimately the fun begins when you start to experiment with new ideas and language features. And it’s fun when you get to that point, because all the nitty-gritty is handled. You get to focus on the superficial stuff like syntax and high-level functions. So you can shave off quite a bit of development time and make coding fun again!

The paradox of faster bytecodes

Bytecodes used to be too slow for anything substantial. On 16-bit machines bytecodes were used in maybe one language (that I know of) and that was the ‘E’ compiler. The E language was maybe 30 years ahead of its time and is probably the only language I can think of that fits cloud programming like hand in glove. But it was also an excellent system automation (scripting) language and really turned some heads back in the late 80s and early 90s. REXX was only recently added to OS X, some 28 years after the Amiga line of computers introduced it to the general public.


Bytecode dump of a program compiled with the node.js version of the compiler

In modern times bytecodes have resurfaced through Java and the .NET framework, which for some reason caused a stir in the whole development community. I honestly never bought into the hype, but I am old enough to remember the whole story – so I’m probably not the Microsoft demographic anyways. Java hyped their virtual machine opcodes to the point of exhaustion. People really are stupid. Man, did they do a number on CEOs and heads of R&D around the world.

Anyways, the end of the story was that Intel and AMD went with it and did some optimizations that could help bytecodes run faster. The stack was optimized with Java in mind, because let’s face it – it is the proverbial assault on the hardware. And the cache was expanded on command from the emper.. eh, Microsoft. Also (if I remember correctly) the “jump to pointer” and various branch instructions were made to execute faster. I remember reading about this in Dr. Dobb’s Journal and Microsoft Developer Magazine; granted it was a few years ago. What was interesting is the symbiotic relationship that exists between Intel and Microsoft; I really didn’t know just how closely knit these guys were.

Either way, bytecodes in 2017 are capable of a lot more than they ever were on 16-bit and early 32-bit systems. A CPU like an Intel i5 or i7 will chew through bytecodes like a warm knife through butter. It depends on how you orchestrate the opcodes and how much work you delegate to the various instructions.

Modeled instructions

Bytecodes are cool but they have to be modeled right, or it’s all going to end up as a bloated, slow and limited system. You don’t want to be too low-level, otherwise what is the point of bytecodes? Bytecodes should be a part of a bigger picture, one that could some day be modeled using FPGAs for instance.

The LDef format is very flexible. Each instruction is ultimately a single 32-bit longword (4 bytes) where each byte holds key information about the command, what data is forward in the cache and how it should be read.

The byte organization is:

  • 0 – Actual opcode
  • 1 – Instruction layout

Depending on the instruction layout, the next two bytes can hold different values. The instruction layout is a simple value that defines how the data for the instruction is passed (a small decoding sketch follows the layout list below).

  • Constant to register
  • Variable to register
  • Register to register
  • Register to variable
  • Register to stack
  • Stack to register
  • Variable to variable
  • Constant to variable
  • Stack to variable
  • Program counter (PC) to register
  • Register to Program counter
  • ED (exception data) to register
  • Register to exception-data
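
Based purely on the byte layout described above (opcode in byte 0, layout in byte 1, and the remaining two bytes interpreted according to the layout), a decoder could unpack an instruction word roughly like this. The record and routine names are my own illustration, not the actual LDef runtime:

type
  TLDefInstruction = record
    OpCode:   integer;  // byte 0: the actual opcode
    Layout:   integer;  // byte 1: how the operands are passed
    Operand1: integer;  // byte 2: meaning depends on the layout
    Operand2: integer;  // byte 3: meaning depends on the layout
  end;

function DecodeInstruction(const Value: integer): TLDefInstruction;
begin
  // Unpack the four bytes of the 32-bit instruction longword
  result.OpCode   := Value and $FF;
  result.Layout   := (Value shr 8) and $FF;
  result.Operand1 := (Value shr 16) and $FF;
  result.Operand2 := (Value shr 24) and $FF;
end;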

As you can probably work out, the layout list above hints at some architectural features. Variables are first-class citizens in LDef; they are allocated, managed and released using instructions. Constants can either be handled automatically and referenced by id (a resource chunk is linked to the class binary) or compiled “in place” directly into the assembly as part of the instruction. For example:

load R[0], "this is a test"

This line of code will take the constant “this is a test” and move it into register #0. You can choose to have the text-data stored as a proper resource which is appended to the compiled bytecode (all classes and modules have a resource chunk) or just compile “as is” and have the data read directly. The first option is faster and something you can adjust with compiler optimization options. The second option is easier to work with when you debug since you can see the data directly as a part of the debug memory dump.
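
Just to make the 32-bit layout above a little more concrete, here is a minimal object pascal sketch of how such an instruction word could be unpacked. Only the byte positions of the opcode and layout values are taken from the description; the operand fields, names and byte order are assumptions for illustration, not the actual LDef runtime.

type
  // Hypothetical record; only bytes 0 (opcode) and 1 (layout) follow
  // the description above. Bytes 2..3 depend on the layout value.
  TLDefInstruction = record
    OpCode:   byte;  // byte 0 - actual opcode
    Layout:   byte;  // byte 1 - how the operand data is passed
    OperandA: byte;  // byte 2 - meaning depends on Layout
    OperandB: byte;  // byte 3 - meaning depends on Layout
  end;

// Assumes the opcode occupies the lowest byte of the longword
function DecodeInstruction(const Value: longword): TLDefInstruction;
begin
  result.OpCode   := Value and $FF;
  result.Layout   := (Value shr 8) and $FF;
  result.OperandA := (Value shr 16) and $FF;
  result.OperandB := (Value shr 24) and $FF;
end;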

And last but not least there are registers, 32 of them (so for the low-level coders out there, you should have few limitations with regards to register mapping). All operations (like divide, multiply etc.) operate on registers only. So to multiply two variables they first have to be moved into registers, the multiplication is executed there, and then you can move the result back to a variable afterwards.
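
As a rough sketch of that flow (helper names and types are made up for the example – this is not the actual runtime), multiplying two variables boils down to the classic move, operate, move-back pattern:

type
  TLDefRegisters = array[0..31] of variant;  // 32 registers, as described above

// Hypothetical helper showing the register-only flow:
// variable -> register, operate on registers, register -> variable
procedure MultiplyVariables(var Regs: TLDefRegisters;
  const VarA, VarB: variant; var VarC: variant);
begin
  Regs[0] := VarA;                // move first variable into R[0]
  Regs[1] := VarB;                // move second variable into R[1]
  Regs[0] := Regs[0] * Regs[1];   // the operation itself only touches registers
  VarC := Regs[0];                // move the result back to a variable
end;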


LDef assembly code. Simple but extremely effective

The reason registers are used in my runtime system is that you will not be able to model an FPGA with high-level concepts like “variables”, should someone ever try to implement this as hardware. Registers, however, are very easy to model and reflect how actual processors work: you move things from memory into a cpu register, perform an action, and then move the result back into memory.

This is where Java made a terrible mistake. They move all data onto the stack and then call the operation. This simplifies execution of instructions since there are never any registers to keep track of, but it just murders stack-space and renders Java useless on mobile devices. The reason Google threw out classical Java (i.e. Java as bytecodes) is due to this fact (and more). After the first Android devices came out they quickly switched to a native compiler – because Java was too slow, too power-hungry and required too much memory (especially stack space) to function properly. Battery life was close to useless and the only way to save Java was to go native. Which is laughable, because the entire point of Java was mobility, “compile once, run everywhere” — yeah well, that didn’t turn out too well did it 😀

.NET improved on this by adding a “load resource” type instruction, where each method loads its constant data by number – and the values go into pre-defined slots (the variables you have declared, naturally). Then you can execute operations in typical “A + B to C” style (actually all of that is omitted since the compiler already knows A, B and C). This is much more stack-friendly and places the performance penalty on the common language runtime (CLR).
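
To show the difference in plain code (an illustrative sketch only, not actual JVM or CLR semantics), here is the same multiply expressed for a pure stack machine; compare it with the register version sketched earlier:

var
  Stack: array of variant;

procedure Push(const Value: variant);
begin
  SetLength(Stack, Length(Stack) + 1);
  Stack[High(Stack)] := Value;
end;

function Pop: variant;
begin
  result := Stack[High(Stack)];
  SetLength(Stack, Length(Stack) - 1);
end;

// Stack machine: every value travels via the stack, so even a trivial
// A * B costs three pushes and three pops before the result lands in C
procedure MultiplyOnStack(const VarA, VarB: variant; var VarC: variant);
begin
  Push(VarA);
  Push(VarB);
  Push(Pop() * Pop());
  VarC := Pop();
end;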

Sadly Microsoft’s platform, like everything Microsoft does, requires a pretty large infrastructure. It’s not simple, elegant and fast – it’s more monolithic, massive and resource hungry. You don’t see .net being the first thing ported to a new platform. You typically see GCC followed by Freepascal.

LDef takes the bytecode architecture one step further. On assembly level you reference data using identifiers just like .net, and each instruction is naturally executed by the runtime-engine – but data handling is kept within the virtual realm. You are expected to use the registers as temporary holding slots for your information. And no operations are ever done directly on a variable.

The benefit of this is:

  • Better payload balancing
  • Easier to JIT since the architecture is closer to real assembly
  • Retains important aspects of how real hardware works (with FPGA in mind)

So there are good, practical reasons behind this part of the standard.

C-like intermediate language

With the assembler so clearly defined you would expect assembly to be the way you work. In essence that is what you do, but since OOP is built into the system and there are structures you are expected to populate — structures that would be tedious to fill in using raw, unbridled assembler — I have opted for a C++-inspired intermediate language.


The LDEF assembler kitchen sink

You would half expect me to implement pascal, but truth be told pascal parsing is more complex than C parsing, and C allows you to recycle parsers more easily; dealing with sub-structures and nested regions is less maintenance and easier to write code for.

So there is no specific reason why I picked C++ as an intermediate language beyond that. I would prefer pascal, but I also think it would cause a lot of confusion since object pascal will be the prime citizen of LDef languages. My other language, N++, also used curly brackets, so I’m honestly not strict about what syntax people prefer.

Intermediate language features supported are:

  • Class declarations
  • Struct declarations
  • Parameter to register mapping
  • Before method code (enter)
  • After method code (leave)
  • Alloc section for class fields
  • Alloc section for method variables

The before and after code for methods is very handy. It allows you to define code that should execute before and after the actual procedure body. On a higher level, when designing a new language, this is where you would implement custom allocation, parameter testing and the like.

So if you call this method:

function testcode() {
  enter {
    writeln("this is called before the method entry");
  }
  leave {
    writeln("this is called after the method exits");
  }
  writeln("this is the method body");
}

Results in the following output:

this is called before the method entry
this is the method body
this is called after the method exits

 

When you work on designing your own language, you quickly find uses for hooks like these.

Truly portable

Now I have no aspirations of going into competition with Oracle, Microsoft or anyone in between. Like most geeks I do things I find interesting and enjoy working within a field of computing that is stimulating and personally rewarding.

Programming languages is an area where things haven’t really changed that much since the golden 80s. Sure, we have gotten a ton of fancy new software, and the way people use languages has changed – but at the end of the day the languages we use have stayed much the same.

JavaScript is probably the only language that came out of the blue and took the world by storm, but that is due to the central role the browser holds for the internet. I sincerely doubt JavaScript would even have made a dent in the market otherwise.

LDef is the type of toolkit that can change all this. It’s not just another language, and it’s not just another bytecode engine. A lot of thought has gone into its architecture, not just notions of “how can we do this or that”, but big ideas about the future of computing and how IOT will sculpt the market within 5-8 years. And the changes will be permanent and irrevocable.

Being able to define new languages will be of the utmost importance in the decade ahead. We don’t even know the landscape yet, but we can extrapolate some ideas based on where technology is going. All of it in broad strokes of course, but still – there are some fundamental facts about computers that are timeless and haven’t aged a day. It’s like mathematics: the Pythagorean theorem may be 2500 years old but it’s just as valid today as it was back then. Principles never die.

I took the example of a game engine at the start of this article. That might have been a poor choice for some, but hopefully the general reader got the message: the nature of control requires articulation. Regardless of whether you are coding an invoice system or a game engine, factors like time, portability and ease of use will be just as valid.

There is also automation to keep your eye on. While most of it is just media hype at this point, there will be some form of AI automation. The media always exaggerates things, so I think we can safely disregard a walking, self-aware Terminator-type robot replacing you at work. In my view you can disregard as much as 80% of what the media talks about (regardless of topic). But some industries will see vast improvements from automation. The oil and gas sector is the most obvious. At the moment security is as good as humans can make it – which means it is flawed, and something goes wrong every day around the globe. But smart pumping stations and clever pressure measurement and handling will make a huge difference for the people who work with oil. And safer oil pipelines mean lives saved and better environmental control.

The question is, how do we describe programs 20 years from now? Are our current tools up for the reality of IOT and billions of connected devices? Do we even have a language that runs equally well as a 1000-instance server cluster as it does as a stand-alone program on your desktop? When you start to look into parallel computing and multi-cluster data processing farms, languages like C# and C++ make little sense. Node.js is close, very close, but dealing with all the callbacks and odd limitations of JavaScript is tedious (which is why we created Smart Pascal to begin with).

The future needs new things. And for that to happen we first need tools to create them. Which is where my passion is.

Node, native and beyond

When people create compilers and programming languages they often do so for a reason. It could be that their own tools are lacking (which was my initial motivation), or that they have thought of a better way to achieve something; the reasons can be many. In Microsoft’s case it was revenge and spite, since they were unsuccessful in stealing Java away from Sun Microsystems (Oracle now owns Java).


LDef binaries are fairly straightforward. The less fluff the better

The point is, you implement your idea using the language you know – on the platform you normally use. So for me that is object pascal on Windows. I’m writing it in object pascal because, while the native compiler and runtime are written in Delphi, they are made to compile under Freepascal for Linux and OS X.

But the primary work is done in Smart Pascal and compiled to JavaScript for node.js. So the native part is actually a back-port from Smart. And there is a good reason I’m doing it this way.

First of all I wanted a runtime and compiler system that would require very little to run. Node.js has grown fat in features over the past couple of years – but out of the box node.js is fast, portable and available almost anywhere these days. You can write some damn fast and scalable cloud servers with node (and with fast I mean FAST, as in handling thousands of online gamers all playing complex first-person worlds) and you can also write some stable and rock solid system services.

Node is turning into a jack of all trades, capable of scaling and clustering way beyond what native software can do. Netflix actually re-wrote their entire service stack using node back in 2014. The old C++ and ASP approach was not able to handle the payload. And every time they did a small change it took 45 minutes to compile and get a binary to test. So yeah, node.js makes so much more sense when you start looking at big data!

So I wanted to write LDef in a way that made it portable and easy to implement, regardless of platform, language and features. Out of the box JavaScript is pretty naked stuff, and the most advanced high-level feature LDef uses is buffers to deal with memory. Everything else is forced to be simple and straightforward. No huge architecture or global system services, just a small and fast runtime and your binaries. And that’s all you need to run your compiled applications.

Ultimately, LDef will be written in LDef itself and compile itself, needing only a small executable stub to be ported to a new platform. Most of Mono (C# for Linux) is written in C# itself – again making it super easy to move Mono between distros and operating systems. You can’t do that with Visual Studio, at least not until Microsoft wants you to. Neither would you expect that from Apple’s Xcode. Just saying.

The only way to achieve the same portability that Mono, Freepascal and C/C++ have to offer is naturally to design the system that way from the beginning. Keep it simple, avoid (operating system) globalization at all cost, and never ever use platform-bound APIs except in the runtime. Be POSIX, but for everything!

Current state of standard and licensing

The standard is currently being documented and a lot of work has been done in this department already. But it’s a huge project to document, since it covers not only LDEF as a high-level toolkit but stretches from the compiler and the source code it is designed to compile, all the way down to the binary output. The standard documentation is close to a book at this stage, but that’s the way it has to be to ensure every part is understood correctly.

But the question most people have is often “how are you licensing this?”.

Well, I really want LDEF to be a free standard. However, to protect it against hijacking and abuse, a license must be obtained by financial entities (as in companies) using the LDEF toolkit and standard in commercial products.

I think the way Unreal handles their open-source business is a great example of how things should be done. They never charge the little guy or the indie developer – until they are successful enough to afford it. So once sales hit a defined sum, you are expected to pay a small percentage in royalties. Which is only fair, since the Unreal engine is central to the software to begin with.

So LDef is open source, free to use for all types of projects (with an obligation to pay a 3% royalty for commercial products that exceed $4999 in revenue). Emphasis is on open source development. As long as the financial obligations of companies and developers using LDEF to create successful products are respected, only creativity sets the limit.

If you use LDEF to create a successful product where you make 50.000 NOK (roughly USD 5000), you are legally bound to pay 3% of your product revenue monthly for the duration of the product. Which is extremely little (3% of $5000 is $150, a lot less than you would pay for a Delphi license, the latter costing upwards of USD 3000).

 

Freepascal online, awesome work!

June 20, 2017 Leave a comment

Freepascal is, with the exception of gcc (the gnu c/c++ compiler toolchain), probably the most versatile and adaptable compiler in the world. Unlike most compilers outside the C camp, Freepascal is a complete toolchain, and it has more than a few similarities to c/c++ in that it’s largely self-reliant.


Freepascal scales from the old mc68000 family to the latest Intel i7, impressive!

FPC is easy to port over to new (or old) platforms, and as far as languages go – object pascal has a level of productivity that I have yet to find in other languages.

Now I have no interest in language wars or fanboy stereotyping, all languages have aspects that are unique and beneficial. So this is not an attempt to promote A over B.

Targeting both old and new from a single codebase

While mainstream, work-related topics never brush up against this, there is actually a healthy group of people still using older platforms. Naturally this is well within the “hobby” range, where passion and curiosity are the prime motivators rather than cash. But still, it is interesting to see how little modern software is capable of scaling back to processors 10, 15 or 20 years out of date.

Alb42 is the nickname of a very industrious individual who has taken it upon himself to port freepascal back to his roots. Which just happens to be the Amiga line of computers.


Some of the best arcade games ever made are “vintage” by nature

But he has also done a lot of work for modern systems. A new Amiga was actually released this year (although most people haven’t noticed it) – and finding development software for a brand new platform is not easy. You have a variant of C/C++ naturally – but the honest truth is that C/C++ is not the language most people use to create desktop productivity software. That is left to object pascal, C# and variations of Java. Delphi alone is responsible for a huge portfolio of popular software on Windows, for instance. Only surpassed recently by monoliths like C#.

So getting freepascal running on Amiga OS 4 (which is the latest and modern operating system shipping with the new Amiga x5000 model) is very important to the market share of the platform.

Do it online!

Alb42 has set up an online compiler testbed where you can type in your freepascal snippets and compile to executable code on the spot. You can choose between MorphOS, Classic 68k, Aros (x86) and Amiga OS 4 (ppc). It’s a very simple setup, yet it’s possible to do ad-hoc compilation and play around with the system 🙂


While humble, Alb42 has set up an interesting online kitchen sink that demonstrates how flexible and powerful object pascal really is

To give it a go, head over to: https://home.alb42.de/fpamiga/

And as always, visit his blog for both VMware images with cross compilation and binaries for your favorite retro system.

But perhaps even more important, your new x5000 (!)

Don’t forget to join us at Amiga Dispatch on Facebook.

Cheers!

A day with the Tinkerboard, part 2

June 10, 2017 Leave a comment

Please read the first article here before you continue.

Having gone through a wealth of alternative boards, alternative to the Raspberry PI 3b that is; I have to admit that I am really, really disappointed by the results. Not just with the Tinkerboard but also boards like the ODroid XU4 and the whole fruit-basket of banana, orange and pineapple PI clones.


What a waste

And please keep in mind that I have not been out to discredit anyone. I have been genuinely interested in these alternatives for many reasons, not just emulation or multimedia. I have no problem paying that extra 15£ for a system that delivers twice the power of the PI, in fact I even paid close to 150£ for the UP board. Which is also the only board that so far has delivered on its promise. Microsoft Windows and the horrors of eMMC storage considered.

Part of what I work with these days include node.js and full screen browser views, often touch enabled for POS (point of sale) type machines. This is an exciting field because for the first time in history we can now create rich, responsive and rock solid solutions using off the shelves web technology.

So while emulation of older systems is interesting for hobby purposes, the performance of vital technologies like the browser – and consequently the performance of JavaScript and the underlying JIT compilation, is naturally paramount for what I do.

So I was really looking forward to the extra power these boards advertise. Both CPU, number of cores, extra memory and, last but not least, the benefits of a fast GPU. The latter directly impacts the effects we can use in our user interface. And while effects may seem superficial, they are important for touch feedback and the customer’s experience.


We have all come across touch devices that give little or no visual feedback. We all know what it’s like when you type something and the UI just hangs while talking with a server. This is why web tech is so important today. In many ways Html5 and websocket are re-defining what we think of as software.

So while I bring few pie-charts and numbers to the table, I do bring honesty and perspective. It’s not all fun and games.

Was it really fair?

It has been voiced by a couple of people that perhaps my way of testing things is unfair. Like I mentioned in my previous article, the PI has a healthy head-start and a rich community that not only consumes and demands, but also contributes substantially to the value of the product. A well established community around a product is worth its weight in gold. The alternative boards have yet to establish that, and when it comes to the Tinkerboard, it’s pretty much just getting started.

Secondly, as far as emulation goes, we have to remember that those who make these products rarely have emulation in mind (and probably not Amiga, which by any standard is esoteric compared to Nintendo and Sega consoles). If we look at what people use gadgets like the Raspberry PI for, I think products like Kodi, Apache and various DIY programming experiments out-rank all the retro systems combined. And it’s clear that the people behind the Tinkerboard are more media oriented than anything else; it ships with hardware support for 4k video playback after all.

Having said that, I do feel I gave both the ODroid and Tinkerboard a fair trial. I did not approach these boards as a developer, but rather as a consumer. I have focused on the experience of familiar things – like using a browser and how well JavaScript runs, does the UI lag under stress; things that any customer will face after purchase.

UAE4Arm vs FS-UAE

As far as UAE (the Unix Amiga Emulator) goes, there is one thing that might be unfair. Amibian is based around something called UAE4Arm, which is a highly optimized version of the Amiga emulator. I would even go so far as to say it’s THE most optimized UAE version on any platform – Windows and OSX included.

The Amiga was a lot more than just a games machine, it was way ahead of both Windows and OS X for a decade

Comparing that with FS-UAE is perhaps unfair. FS-UAE is based on the older, non-optimized codebase. It’s known for stable emulation and good JIT performance. Sadly the JIT engine only works on x86 and is thus disabled on ARM systems. So to be honest it can’t hold a candle to UAE4Arm. The latter uses lookup tables for everything, fast switch cases instead of long sequences of “if then else” code (easier for the compiler to optimize), and it benefits from hardcore llvm level 3 optimization. So I agree, comparing FS to UAE4Arm is like comparing a tiger with a kitten. It’s not even in the same ballpark.
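
To illustrate what that kind of optimization looks like (an object pascal sketch for illustration only – this is not actual UAE4Arm code, and the handler names are made up), compare a branch-chain dispatcher with a table-driven one:

type
  TOpcodeHandler = procedure(OpCode: word);

procedure HandleAdd(OpCode: word);     begin { emulate ADD here }     end;
procedure HandleSub(OpCode: word);     begin { emulate SUB here }     end;
procedure HandleIllegal(OpCode: word); begin { handle bad opcode }    end;

var
  // One entry per possible 16-bit opcode word, filled once at startup
  Handlers: array[0..$FFFF] of TOpcodeHandler;

// Branch-chain style: the CPU must evaluate each test in turn
procedure DispatchWithBranches(OpCode: word);
begin
  if (OpCode and $F000) = $D000 then
    HandleAdd(OpCode)
  else if (OpCode and $F000) = $9000 then
    HandleSub(OpCode)
  else
    HandleIllegal(OpCode);
end;

// Table style: one indexed read and an indirect call, no comparisons
procedure DispatchWithTable(OpCode: word);
begin
  Handlers[OpCode](OpCode);
end;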

Optimization on this level is almost a lost art these days. People are used to languages like C# and Java – which are hopelessly bloated (for those that have seen what raw assembler and hand optimized pascal can deliver). On the Amiga we learned how to shave off a clock-cycle here and there – and the end result really made a huge difference.

The same type of low-level, every-cycle-counts optimization is used throughout UAE4Arm. FS-UAE is safe, slow and can hardly deliver A500 speed on a Raspberry PI. Yet UAE4Arm spits out 3.2 times the power of an Amiga 4000 with a 68040 cpu. That was the flagship power machine back in the day, and this little $40 card delivers 3 times the power under emulation.

It’s like comparing a muffled 1978 Lada with a pissed off 2017 Lamborghini.

Fair is fair

Having tested just about every distro I could find for the Tinker, even back-tracking to older releases in the hope that the drivers there would be better suited for the board (perhaps the new drivers were released in a hurry, or a mistake was made), the last straw really was Android. My hope was that Android would have better drivers since it sees a lot of development. It has giants like Google and Samsung behind it – so the last glimmer of light was that Asus had focused on Android rather than Linux.


Will Android save the day and give the Tinker its due credit?

I downloaded the latest image, burnt it and booted – thinking that this should finally put things right. Well it didn’t. Quite the opposite.

The first boot took almost an hour. Yes, an hour. I suspect this is because it did a network restore – downloading packages and requirements from the Internet. But that is still no excuse. I have tried Android on the PI and it was up and running in less than 10 minutes; and that includes having to set some initial user information.

When the Android desktop finally came into view, it was stripped down with no repository package manager. In other words you would have to manually install packages. But that would have been OK had it not been for the utterly sluggish performance.

Android on the PI is no race-car, but at least it doesn’t freeze and lock up every five seconds. Clicking on a link in the browser froze the whole system, and you were asked if you wanted to terminate the process again and again while Android figured out the meaning of life or whatever. Every single time.

I’m sorry but this is the worst experience I have ever had with a board. The ODroid XU4 at least tries to deliver a good experience. I had no problems finding Linux editions that ran well (especially gaming systems, Kodi and some homebrew distros) – and the chrome build for the ODroid gave far better results than both the PI and Tinkerboard. It was not the power-house it had been hyped up to be, but at least it delivered a normal desktop experience (even the PI locks up when facing demanding tasks, but the Tinker has a quad-core cpu, twice the ram and a supposedly superior GPU. With the capacity of running four threads and scheduling tasks accordingly, the Tinkerboard should remain both stable and interactive under pressure).

I am sorry but I cannot recommend the Tinkerboard at all. The ODroid gives you a slight edge over the PI and if someone baked a more optimized Linux distro for it (it ships with a variant of Ubuntu) then ODroid XU4  would kick serious ass. But the Tinkerboard is a complete disaster.

Do not waste your money on the Tinkerboard. It may have the hardware, but the software to utilize and express that power is simply not there! It has been the most pointless purchase I have made this year.

A common waste

The common denominator between these alternatives is the notion that more power equals more stuff. Which ultimately results in biting off more than they can chew. A distro like Pixel is lightweight, optimized, and the PI foundation has spent a considerable amount of time making sure the drivers run as fast as possible.

Weird Science, a fun but completely fictional movie from 1985. I just posted this to liven up an otherwise boring trip down memory lane

Saying it doesn’t make it so, except in Weird Science

What these alternative hardware suppliers don’t seem to get is this: what is the point of having a cpu that is twice as fast as the PI, when you waste it on a more demanding version of Linux? Both the ODroid and Tinkerboard ship with Ubuntu clones. And while Ubuntu is wonderful to work with on a high-end x86 machine, with 32 gigabytes of ram and an i7 CPU growling away under your desk, it is without a doubt the worst possible OS you could pick for a tiny ARM board.

To put things into perspective: the Tinkerboard has the “theoretical” cpu power of a Samsung Galaxy S4 mobile phone (2013). Yet in their infinite wisdom they decided Ubuntu is what is really going to show off this fart of a device.

It makes absolutely no sense from a consumer’s perspective. As a user you want that extra power for running programs. You want to use the extra power on your stuff. Yet these guys seem to think that wasting all the extra power on infrastructure is going to score points.

So if anyone from the Tinkerboard or ODroid camp reads this, please remember the old saying “If you cannot innovate, emulate”. Make a fast, stable and small Jessie distro like Pixel. Focus on optimization. Compile whatever you can with maximum optimization. Shave off every clock cycle you can. Hand-carve the drivers in assembler if that’s what it takes.

Because the sad result is, that right now the Raspberry PI is outperforming you with lesser hardware. There is no point in buying a Tinkerboard because it actually delivers a worse experience. It has bitten off more than it can chew and it’s trying to be something it’s not.

A day with the Tinkerboard, part 1

June 7, 2017 1 comment

Anyone into IOT mini computers, be it as a hobby or professionally, has no doubt heard about the Tinkerboard. The Tinkerboard is an alternative to the Raspberry PI (aren’t they all) produced by hardware giant Asus. Everything from its form-factor (size) to its color coordinated GPIO pins is more or less the same as the PI. What separates the two is that the Tinker gives you double the CPU power, double the ram and pretty much double of everything. If we compare it with the Raspberry PI 3b that is.

But you don’t get double for free. The Tinkerboard retails on Amazon for USD 60 which means something in the ballpark of 55£ for those that live in the UK. For the rest of us that live in far-away places like Norway, prices and taxes may vary.


Before I continue I want to write a few words about pricing. The word “expensive” is one I keep hearing when it comes to the Tinkerboard, but that is extremely unfair considering what you get for your money.

The Raspberry PI 3b is an awesome product and packs a lot of goodies into a credit-card size computer. The same is true for the Tinkerboard, but they have packed even more stuff into the same space! The difference in price is just 15£ – but those 15£ give you the power to emulate demanding platforms like Nintendo Wii, Sega Dreamcast and Playstation (if you are into retro gaming). Platforms the Raspberry PI 3b doesn’t stand a chance emulating (at least not smoothly).

For serious applications all that extra cpu power can be the difference between success and failure. In many ways the Tinker feels a bit like working on a low-end laptop. If you are building a movie server for your living-room, the Tinkerboard will be able to convert and stream movies much smoother and better than the PI. And if you are looking for a device to power your next kiosk system or POS device, again the extra memory and cpu power is going to make all the difference. And with just 15£ between the two models, I cannot imagine why anyone would pick the PI if gaming or movie streaming is on the menu.

Oh and it has built-in wi-fi and bluetooth support! Here are the specs as listed on Amazon:

  • High performance Quad Core ARM SOC 1.8GHz with 2GB of RAM -The Tinker board features the Rock chip RK3288 Soc with Mali – T764 GPU and 2GB of Dual Channel DDR3 memory
  • Non shared GBit LAN, Shielded Wi-Fi with upgradable antenna support
  • Highly compatible PCB & Topology, Tinker board offer extensive compatibility with SBC accessories & chassis
  • HD Audio & HD & UHD Video Support – Tinker board supports HD audio 192/24bit audio along with accelerated HD & UHD ( 4K ) video playback support* requires use of Rock chip video player in TinkerOS
  • DIY Friendly Design – Tinker board features multiple DIY friendly use features including a color coded GPIO header, silkscreen PCB and color coded pull tabs

That sounds pretty nice, doesn’t it 🙂

One does not simply take on the mighty PI

Being an alternative to the Raspberry PI is not as easy as it sounds. You would imagine that it all boils down to the specs right? So if you beef up the cpu, ram and gpu everyone will throw away their PI’s and buy your stuff? If only that was true. In which case the Tinkerboard would be awesome since it outperforms the PI on every level.

But it’s not that simple. There is more to a product than just raw power.

One of the cool things about the Raspberry PI is the community. The knowledge base of the product if you like. In order to build a community and be a success, everything has to click. I mean just look at the third-party eco-system that exists around the PI; all those companies selling add-ons and widgets, sensors and remote controls. The PI itself is quite dull. It’s all that extra stuff that makes it so fun to play with. And then you have things like excellent customer service, a forum where people help each other out, well written documentation — the value of the product suddenly exceeds whatever you paid for it to begin with.

And the Tinkerboard is not alone in trying to get a piece of the action. It competes with boards like the ODroid XU4, the x86 based UP boards, Latte Panda, The Pine boards and a whole fruit-basket of banana, orange and whatever PI clones. All of them trying to get a bite of the phenomenon that is the Raspberry PI.

So Asus got their work cut out for them.

First impressions

Having unpacked the board and plugged in all the cables (which is exactly as dull as it sounds) the first thing you do is look for software. Which naturally should be a one-click operation on their website.

Well, the first thing that annoyed me was exactly that: the Asus website. The Tinkerboard is not something that belongs on a sterile, streamlined corporate site. It belongs on a website that is easy to navigate, with plenty of information that hobbyists and “tinkers” need to get started. The Asus website simply lacks the warmth and friendliness of the PI foundation website. I even had to google around a couple of times just to find the disk images. These should be readily available on the site, on the first page even – to make sure customers can get up and running quickly.


It’s a little bit clangy and a little bit jammy

The presentation of the board was ok, but you really shouldn’t have to use an external search engine to locate essential data for something you just bought. Unlike the PI foundation, Asus seems to have put little effort into what customers get once they have bought the board. As for community, I don’t even know if there is one.

When you finally manage to find the bloody download page, these are your options (there are 17 downloads in total, counting older OS images, config files and schematics):

  • TinkerOS 1.8
  • TinkerOS 1.6 Beta (my initial download)
  • Android, also beta

And don’t expect a polished Linux distro like Pixel either. You get a clean distro with some typical stuff pre-loaded (python, perl for some reason, and various tidbits) but nothing close to the quality of Pixel.

Note: Why they would list older beta releases in their downloads is beyond me. It only adds to the confusion of a sterile list of items. And customers who just got their boards will be eager to try it out and hence make a mistake like I did. Beta and unstable releases should be in a separate category, while the stable and up-to-date images are listed first.

The second thing that really irritated me was something as simple as adjusting the keyboard layout and setting the locale. These are simply not present in preferences at all. Once again you have to google around until you find the terminal commands to manually set these things, which utterly defeats the purpose of booting into a desktop environment to begin with. This might seem trivial for someone who knows Linux well, but to a novice it can take hours. Things this fundamental should be available in preferences or at the very least as a script. Like raspi-config on the PI.

But OK. If we look away from these superficial things and focus on the actual board, things are not half bad.

It’s the insides that counts

The first thing you are going to notice is how much faster the system is compared to the PI. When you start a demanding application on the PI, like libre-office or Lazarus, the whole system can freeze up for a few seconds while the CPU goes through the roof. I was pleased to find that this is not the case here (at least not as disruptive). You can start a program and the system continues to be responsive while things load.

What made me furious though was the quality of the graphics drivers. Performance-wise the Tinkerboard is twice (actually a bit more) as fast as the PI on all fronts – and the GPU is supposed to deliver a vastly superior graphics experience. Yet when I ran Amibian.js (the Smart Pascal JavaScript desktop system) in Chrome, the CSS hardware effects were painfully slow. I even questioned if the GPU was used at all (note: which I later confirmed was the case, so much for downloading a beta image by mistake). I was amazed at the cpu power of the Tinkerboard, because despite having to do everything purely with the CPU (moving pixels in memory, allocating device independent bitmaps, rasterizing layers and complex compositions in X), it was still more than usable.

But it sure as hell was not the superior graphics experience I was hoping for.

I also noticed that when moving a normal desktop window (X window that is) rapidly around the screen, it would lag behind and re-trace your steps until it caught up with the mouse pointer. This was very odd since most composition engines would have eliminated these movements as much as possible – and the GPU would do all the work. Again, I blame the drivers and pondered what could possibly make a GPU perform so badly.


I’m surprised it booted at all, nothing is working

So while the system is notably faster than the PI, I get the sense that the drivers and tooling are not really polished. The hardware no doubt has juice, but without drivers carefully written for the board, most of the extra juice you pay for is wasted.

In the end I opened up chrome and typed “chrome://gpu” to see what was going on. And not surprisingly, I was correct. It wasn’t using the GPU at all, and I had wasted hours on something that should have been clearly marked “Lacks hardware acceleration” in bold, italic, underlined big red letters. Like the PI foundation did when they shipped the Raspberry PI 2 without GPU drivers. But they promptly delivered, and it was ok because you were informed up front.

 

Back to scratch

After a trip back to google I downloaded the right (or at least working) image, namely the TinkerOS 1.8 stable. It booted up, resized the partition to fill the whole SD card automatically (and other boring bits).

The first thing I did was start chrome and use the “chrome://gpu” to get some statistics. And thankfully most of the items were now working and active.


That’s better, although I ponder why GPU DIBs are not active

Emulation, the name of the game

While I have serious tests to perform there is no doubt that what I was most eager to play with was UAE (the Unix Amiga Emulator). Like most retro-heads I have a ton of Raspberry PI’s and ODroid’s around the house with the sole purpose of emulating older game or computer systems. And I think everyone knows that my favorite computer in the whole wide world is the Commodore Amiga.

On the PI, Gunnar’s native Amibian distro gives you awesome performance. If you overclock the ram, gpu and cpu using safe numbers – the PI 3b spits out roughly 3.2 times the power of a Mc68040 based Amiga 4000. In other words 3 times the power of a high-end Amiga from the early 90s. That is quite an achievement for a $40 board. You can only imagine my expectations for a board running more than twice as fast as the PI does (and costing that 15£ more).


Boots fine but you can forget about using it. The PI is 100 times faster than this

One snag though was that UAE4Arm, the highly optimized version of the Amiga Emulator, doesn’t exist for the Tinkerboard. In theory it should compile without problems since the latest version uses SDL exclusively. So there is no odd, deprecated UI toolkit like the old UAE4Arm used.

The only thing available was fs-uae. And while that is a very popular emulator on Windows and Mac, it’s sadly not optimized as much as UAE4Arm is. The latter uses lookup tables to the extreme, all if, then, else statements have been replaced by fast switches — and it’s just optimized beyond anything else running on ARM.

The result on the PI is phenomenal, but sadly fs-uae was all I could play with. I will try to compile UAE4Arm on the Tinkerboard later though. But right now I’m too busy.

Not optimized at all

It is hard to pinpoint exactly where the problem is, but I suspect the older codebase of fs-uae combined with poorly written drivers is what resulted in the absolutely sluggish behavior.

Don’t get me wrong, if all you want to do is play some A500 or A1200 games, the Tinker is more than capable. But if you want to run the same high-performance desktop that Amibian gives you on the PI – you can pretty much forget it.

I set the system to emulate an A4000, activated the JIT engine and set the emulated screen to match my desktop resolution. This should run like hell — yet it was slow as a snail. You could literally see the mouse-pointer slowly trying to keep up as I moved it across the desktop.

One interesting thing though. When I ran fs-uae on the first image, the beta image where everything was crap, it actually ran exceptionally well. But here on the so-called stable and “up to date” version, it was useless for anything but clean floppy gaming.

There are many factors involved in this. The drivers sit at the bottom, just above the kernel, and expose the hardware so it can be used through a common API. On top of this you have stuff like OpenGL, and on top of that again you have SDL, which simplifies typical gaming or graphics tasks. This may sound like a long road for code to travel, but it’s actually very fast. If the drivers expose the power of the GPU, that is.

I can only conclude at this point that the Tinkerboard has the firepower. I have seen the results of tests performed by others (and tested a retrogaming image) – and you feel the difference almost immediately when you boot the device. But the graphics drivers seem… crippled somehow. It’s easy to point the finger at Asus and say they have done a terrible job at writing drivers here, but it can also be a bottleneck elsewhere in the system.

This is something Raspberry PI has got so right. Every part of the system has been optimized for the hardware, which makes the PI feel smooth even under a lot of stress. It delegates all the hard graphical stuff to the GPU – and the GPU just deals with it.

Sadly this is not the case with the Tinkerboard so far. And it really is such a shame because the specs speak for themselves.

Android

Next time: We will be giving Android a test and hopefully that will have drivers that are more up to date than whatever we have seen so far..