Fixed Header in Smart Applications

January 3, 2018

Smart Mobile Studio gives you a lot of really cool visual controls to play with. One of them is a header control (also called a navigation panel by some) that traditionally shows and hides its buttons (back and next) in response to form navigation.

One question that many people have asked is: how can I make a header that remains fixed and doesn't scroll with the forms? So no matter what form I navigate to, the header remains in place, preferably easily accessed.

The Visual Application

Smart Visual Applications are more than just forms and buttons. The first thing that is created when you run a visual Smart application is naturally an instance of TApplication; this in turn creates a display control, and inside that again there is something called a “viewport”. Forms are always created inside the viewport.

If you are wondering why on earth we use two nested containers like this, that has to do with scrolling and keeping our controls isolated in one place. Forms are positioned horizontally inside the viewport. So whenever you move from Form1 to Form2, depending on the scroll effect you have picked, the second form is lined up either before or after the current form. We then execute a CSS3 animation that smoothly scrolls the new form into view, or the previous form out of view – depending on how you look at it.

The display

The root display control, TW3Display, has only one job, and that is to house the view control. It also contains code to lay out child controls vertically. Since there is typically only one control present, you don't notice much of what TW3Display does.

The “trick” to a static header that remains unaffected by forms is simply to create the header control with “Application.Display” as the parent. That is all you have to do. You could also create it on Application.Display.View, but then it would cause problems with scrolling. My point for mentioning this is to underline that the RTL has no special rules for its structure. All visual entities that make up your Smart Pascal application follow the same laws and are subject to the same rules as TW3Button or TW3Label might be.

Creating controls that don’t attach to a form

The vertical layout that TW3Display performs automatically is very simple. It sorts the child elements based on their Y position and places them directly after each other. This means that all you have to do is create the header and give it a negative Y position, and it will always remain fixed on top of the viewport and its forms.

TW3Application has a virtual method called ApplicationStarting() that is perfect for what we want to achieve. As the name says, this method fires when the application is starting – an ideal place for creating controls that don't attach to a form. It also has an accompanying ApplicationClosing() method where we can release the control.

So let's start by creating our control. Each visual application has a “Unit1” that is created automatically. This contains your application object. While TApplication is a bit anonymous under Delphi or Lazarus, under Smart it serves a more central role: it's the place to expose global values that should be usable throughout the entire program.

unit Unit1;

interface

uses
  Pseudo.CreateForms, // auto-generated unit that creates forms during startup
  System.Types, SmartCL.System, SmartCL.Components, SmartCL.Forms,
  SmartCL.Application; // plus whatever else your project pulls in

type
  TApplication = class(TW3CustomApplication)
  private
    FHeader: TW3HeaderControl;
  protected
    procedure ApplicationStarting; override;
    procedure ApplicationClosing; override;
  public
    property Header: TW3HeaderControl read FHeader;
  end;

implementation

procedure TApplication.ApplicationStarting;
begin
  inherited;
  FHeader := TW3HeaderControl.Create(Display);
  FHeader.SetBounds(0, -10, 100, 46);
end;

procedure TApplication.ApplicationClosing;
begin
  FHeader.Free;
  inherited;
end;

end.


Let’s compile and see what we got so far!


As expected we now have a header outside the form region

Global access

SmartCL, which is the namespace (a collection of units organized under one name) where all visual, DOM-based classes live, has a global function for getting the Application object. This is simply Application(), and you have probably used it many times.

What is not so well known is that Application() returns a stock TCustomApplication instance. In other words, if you inspect the instance you will find none of the properties you have defined in TApplication. This is because TApplication is unknown until the application is executed. So in order to access your actual application object, you need to typecast, like I do here:

procedure TForm1.InitializeObject;
begin
  inherited;
  {$I 'Form1:impl'}
  var app := TApplication(Application);
  app.Header.Title.Caption := 'This is my header';
end;

Let’s have a look at the result (note: I added a label as well, just so you don’t think you missed something):


Now this approach works fine for many types of objects. I tend to isolate my database instance there; a static header, global storage, all of it can be neatly exposed via TApplication. Fast, simple and efficient.

The final step

The initial state for the static header should be that both buttons are hidden. So when you start the application it just shows a title, nothing more.

When you click something that causes navigation to form2 (or some other second form), the back-button should become visible once form2 has scrolled into view.

When the user clicks the back-button, the opposite should happen: the back-button should be disabled while you navigate back to form1, then completely hidden once you have arrived.

I don't think I need to demonstrate this. Obviously, if you have forms that lead to more forms, then you probably want to add a “navigation stack” to the application object: an array that holds the previously visited forms.

Then whenever someone hits the “back button” you just pop the previous form off the stack, and navigate to it.
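As a sketch, such a stack could live on the application object from earlier. Note that the field name, the helper methods and the GotoForm() call are my own assumptions here; the exact form type and navigation API depend on your RTL version:

```
type
  TApplication = class(TW3CustomApplication)
  private
    FStack: array of TW3CustomForm; // previously visited forms
  public
    procedure PushForm(const CurrentForm: TW3CustomForm);
    procedure PopForm;
  end;

procedure TApplication.PushForm(const CurrentForm: TW3CustomForm);
begin
  // remember where we came from before navigating away
  FStack.Add(CurrentForm);
end;

procedure TApplication.PopForm;
begin
  if FStack.Length > 0 then
  begin
    // navigate back to the most recently visited form
    GotoForm(FStack[FStack.Length - 1]); // hypothetical navigation call
    FStack.Delete(FStack.Length - 1);
  end;
end;
```

Call PushForm() just before you navigate forward, and wire the header's back-button to PopForm().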

Well, hope it helps!




PNG icons on Amiga OS 3.X

December 6, 2017

A couple of days back I posted a couple of pictures of my Raspberry PI 3b based Amiga setup. This caused quite a stir on several groups and people were unsure what exactly I was posting. Is this Amiga OS 4? Is it Aros? Scalos? Or perhaps just a pimped up classic Amiga 3.x?


The more the questions arose, the more I realized that a lot of people don't really know what the PI can do. I don't blame them; between work, kids and mending a broken back, it probably took me a year before I even entertained the idea of setting up a proper UAE environment. And as luck would have it, two good friends of mine, Gunnar Kristjánsson and Thomas Navarro Garcia, had already done the worst part: namely to produce a Linux distro that auto-boots into Workbench (or technically, into a full-screen UAE environment).

Taking advantage of speed

Purists might not be happy about it, but the PI delivers some serious processing power when it comes to Amiga emulation. The version of UAE Thomas and Gunnar opted for is UAE4Arm, a special version that contains a hand-optimized JIT engine. This takes 68k code and generates ARM machine code “on the fly”, and is thus able to run Amiga software much faster than traditional UAE variations like FS-UAE.

But what should we do with all that extra speed? I mean, there is a limited number of tasks that benefit from the extra processing power of the PI (or an accelerator for that matter). Well, being a programmer, compilation is one task where I really love the extra grunt. When using modern compilers like Free Pascal 3.x on a classic 68k Amiga, there is no denying we need all the CPU power we can get. So compiling on the PI is a great boost over ordinary, real Amiga machines.


Free Pascal is great, although the old “turbo” IDE is due for an overhaul

The second aspect is the infrastructure. And this is where we get to the pimping part. By default Workbench is optimized for low-color representation, meaning that icons and backdrops will be 4-8 colors, fixed palette and fairly useless by modern standards. But UAE4Arm has built-in support for RTG (re-targetable graphics), which means 15, 16, 24 and 32 bit screen-modes (the same as any modern PC) – so surely we can remedy the visuals, right?

Well, I had a google around and found that there is an icon library that supports the latest PNG-based icons. These are icons that contain 32 bit graphics with support for alpha blending (transparency). This is the exact same icon system that is used in Amiga OS 4.

So what I did was download the version 46.x icon library from Aminet. Since the PI emulates (in my config) a MC68040 CPU, I was able to use the 040-optimized binary. In essence I just copied that into my “libs” folder (removing the old one first, just to be sure).

And voila, my Workbench was now able to show 32 bit PNG icons just like OS 4!

Getting some bling

With OS 4 style icons supported, where do I get some icons to play with? Well, again I went on Aminet and downloaded a ton of large icon packs. I also visited OS4Depot and downloaded some cool background pictures and even more icons.

Then came the time-consuming process of manually replacing the *.info files. Every file that you can see via Workbench has an associated .info file with the same name. So if you have a program called “myprogram”, then the icon file will be “myprogram.info”.

And that's basically it! I spent a Saturday replacing icons and doing some mild tweaking in VisualPrefs (again from Aminet), and suddenly my old, grey Workbench was alive with radiant colors.


I love it! It might not be perfect, but I have seen Linux distros that look worse!

What I find amazing is that even after 30 years the old Amiga OS 3.x can still surprise us! If nothing else it's a testament to the flexible architecture the guys at Commodore knocked out, an architecture that thrives in extremely low-memory situations – yet delivers in spades if you give it more to work with.

Doing some modern chores

One of the first things I installed on my PI was a copy of Free Pascal. This has been updated to version 3.1, which is just one revision behind the compiler used on Windows and OS X. This is a bit too nifty for standard Amiga machines: you need at least an A1200 with 64 megabytes of RAM to work with it, although the size of the binaries is reasonably small if you stay clear of the somewhat bloated LCL framework.

So I was able to use my Object Pascal skills to create an unzip/zip command-line program in 15 minutes. Doing this on my Amibian box felt great, and I really enjoy the fresh new look of Workbench. In a perfect world OS4 would be 68k and the CPUs would all be FPGAs running close to Intel i7 speeds, but alas – a humble PI will have to do for now.
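For reference, here is roughly what the unzip half can look like using Free Pascal's bundled Zipper unit. This is a sketch of my own, not the exact program mentioned above:

```
program unzipit;

{$mode objfpc}{$H+}

uses
  Zipper; // ships with Free Pascal (FCL)

var
  UnZip: TUnZipper;
begin
  UnZip := TUnZipper.Create;
  try
    UnZip.FileName := ParamStr(1);   // archive to extract
    UnZip.OutputPath := ParamStr(2); // destination directory
    UnZip.Examine;                   // read the archive's directory
    UnZip.UnZipAllFiles;             // extract everything
  finally
    UnZip.Free;
  end;
end.
```

The zipping half is the mirror image via TZipper. Nothing Amiga-specific here, which is exactly why cross-platform Free Pascal is so pleasant on this setup.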


If you want to re-create my experiment then start by downloading Amibian. This is a clean Linux distro and doesn't contain Workbench. So after you have made an SD card with Amibian, you need to copy over Workbench yourself. I suggest you copy over the raw files and mount a Linux folder as a drive. Using hard-disk images is possible, but I don't trust them: should an error occur, you lose everything. So yeah, stick with folder-mounted drives if you want less frustration.

You can visit Amibian here:

HTML5 Attributes, learn how to trigger conditional styling with Smart Mobile Studio

November 8, 2017

I'm not sure if I have written about attributes before; probably, because they are so awesome to work with. But today I'm going to show you something that makes them even more awesome, bordering on unbelievable.

What are HTML attributes again?

Before we dig into the juicy stuff, let's talk about attributes. For those that don't know much about HTML or CSS, here is a quick and dirty overview. A lot of people use Smart Mobile Studio because they don't know CSS or HTML beyond the basics (or even because they don't want to learn it; quite a few can't stand JavaScript and CSS). Well, that is not a problem.

Note-1: While not a vital prerequisite, I do suggest you buy a good book on JavaScript, HTML and CSS. If you are serious about using web technology (like node.js on the server) your Smart skills will benefit greatly by knowing how things work “under the hood” so to speak. You will make better Smart Mobile Studio applications and you will understand the RTL at a deeper level than the average user.

OK, back to attributes. You know how HTML tags have parameters right? For example, a link to another webpage looks like this:

<a href="http://blablabla">This is a link</a>

Note-2: I don't have time to teach you HTML from scratch, so if you have no idea what the “A” tag is, then please google it.

The focus here is not on the “a” part, but rather on the “href” parameter. That is actually not a parameter but a tag attribute (which must not be confused with a tag property, btw).

Back in the day, attributes used to be exclusive, meaning that if you tried to set some attribute value the tag didn't support, nothing would happen. The browser would just ignore it and the information would be discarded.

Around HTML5 all of that changed. Suddenly we got the freedom to declare our own attributes, regardless of tag. The only catch is that the attribute name must be prefixed with “data-”, which makes sense because the browser needs to tell the difference between our custom attributes, junk and intrinsic (supported) attributes.
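For example, any tag can carry custom data as long as the name is prefixed accordingly. The attribute name below is just an illustration of mine:

```html
<!-- "data-state" is our own invention; the browser stores it verbatim -->
<div id="status" data-state="busy">Working ...</div>
```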

Storing information outside the pascal instance

When you create a visual control, the control internally creates a DOM element (or tag object, same thing) that it manages. Most visual controls in our RTL manage a DIV element, because that is just a square block that can easily be molded and shaped into whatever you like.

But, when you create a Smart Pascal class you don't just get a DOM element in return; you get a Smart Pascal object instance. This is the same as Delphi and Lazarus: a class is a blueprint of an object. You don't create classes, you create instances.

The same thing happens when you use Smart Pascal: the JSVM (JavaScript virtual machine) delivers a JavaScript object instance – and that is what your code operates on. When you create a visual class instance, that in turn will create a DOM element and manage that until you release the Smart Pascal instance.

Storing information in a class is easy. It's one of the fundamental aspects of object-oriented programming, and there really isn't that much to say about it. But what if you need to store information in a control you don't own? Perhaps you have installed a package you bought, or that a friend shared with you – and you can't change the class (or perhaps don't want to change the class). What then?

This is where the attribute object comes to the rescue. Because now you can store information directly in the DOM element rather than altering the class itself (!)

That is so powerful I don't even know where to start, because you can write libraries that do amazing things without extra fields, and without demanding that the user change their controls (and in some cases, without forcing the user to inherit from a particular custom control).
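Using the RTL's attribute object (the same Read/Write calls that appear in the button example further down), tagging a control you don't own could look something like this. TW3SomeThirdPartyControl and the attribute name are of course made up for the sketch:

```
var Ctrl := TW3SomeThirdPartyControl.Create(self);

// store our own state directly in the control's DOM element,
// without touching (or inheriting from) the class itself
Ctrl.Attributes.Write('mylib-state', 'initialized');

// ... later, anywhere that can see the control:
if Ctrl.Attributes.Read('mylib-state') = 'initialized' then
  WriteLn('control already prepared by our library');
```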

A real-life example

Our special effect unit, SmartCL.Effects.pas, uses this technique to keep track of effect state. When you execute an effect on a control a busy-flag is written as an attribute to the managed DOM object. And when the effect is finished the busy-flag is reset.


Our CSS hardware powered effect unit uses attributes to keep track of running effects

If you execute 10 effects on a control, it’s this busy flag that stops all of them running at the same time (which would cause havoc). While this attribute is set any queued effects wait their turn.

This would be impossible to achieve without declaring a busy property, or doing some form of stacking behind the scenes; both of them expensive code-wise. But with attributes it's a piece of cake.

And now for the juicy parts

Now that you know what attributes do and how awesome they are, what can possibly make them even more awesome? In short: “CSS attribute pseudo selectors” (phew, that is a mouthful, isn't it!).

So what the heck is a pseudo selector? Again, it's a long story, so I'm just going to call it “states”. It allows you to define styles that should be activated when a particular state occurs. The most typical state is the :active state: when you press a button, the DOM element is said to be active. This allows us to write CSS styles that are applied when you press the button (like changing the background, border or font color).
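A minimal example of such a state-based style. The class name follows the RTL convention of styling controls by their Pascal class name; the colors are arbitrary:

```css
/* applied only while the button is being pressed */
.TW3Button:active {
  background-color: #3080F0;
  color: #FFFFFF;
}
```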

But did you know you can also define styles that react to attribute changes?

Just stop and think about this for a moment:

  • You can define your own attributes
  • You can read, write and check for attributes
  • Attributes are part of the DOM element, not the JS instance
  • You can define CSS that apply when an element has an attribute
  • You can define CSS that apply if an attribute has a particular value

If you are still wondering what the heck this is good for, imagine the following:

  1. Write an event-handler for TW3Application.OnOrientationChange (an event that fires when the user rotates the mobile device horizontally or vertically).
  2. Store the orientation as an attribute value
  3. Define CSS especially for the orientation attribute values

The browser will automatically notice the attribute change and apply the corresponding CSS. This is probably one of the coolest CSS features ever.
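As a sketch, the CSS side of step 3 could look like this, assuming your handler writes the current orientation into an attribute named “orientation” (remember that the RTL adds the “data-” prefix for you). The form class name and colors are illustrative:

```css
/* picked up automatically the moment the attribute value changes */
.TW3CustomForm[data-orientation="portrait"] {
  background-color: #FFFFFF;
}

.TW3CustomForm[data-orientation="landscape"] {
  background-color: #F0F0FF;
}
```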

Other things that come to mind:

  • You can write CSS that colors the rows in a grid or listbox based on the data-type the row contains. So an integer can have a different background from a float, boolean or string. And all of it can be automated with no code required on your part. You just need to write the CSS rule once and that’s it.
  • You can use attributes to trigger pre-defined animations. In fact, you could pre-define 100 different animations and, based on the attribute name, trigger the correct one. Again, all of it can be neatly implemented as CSS.

Let’s make a button that triggers a style

The following example is simple, but it should serve as a good starting point: easy to build on and not too complex. Let's start with the CSS:

button[data-funky="this rocks"] {
  background: none;
  background-color: #FF00FF;
  color: #FFFFFF;
  font-size: 22px;
  font-weight: bold;
}
The CSS above should be easy to understand. First we define the name of the DOM element, which in my case is a button. Next we define the attribute and, as mentioned, it has to be prefixed with “data-” (our attributes class does this automatically in the RTL, so you don't need to prefix it in your code). And finally we define the value the style should trigger on: “this rocks”.

Right, let’s write some code:

  MyButton := TW3Button.Create(self);
  MyButton.SetBounds(100, 280, 100, 44);
  MyButton.Caption := 'Click me!';
  MyButton.OnClick := procedure (Sender: TObject)
    begin
      var Text := MyButton.Attributes.Read('funky');
      if Text <> 'this rocks' then
        MyButton.Attributes.Write('funky', 'this rocks')
      else
        MyButton.Attributes.Write('funky', ''); // clears the trigger value
    end;
The code is very simple: we read the value of the attribute and then toggle it based on the content. So when you click the button, it will just toggle the trigger value.

This is how the button looks before we click it:


And when we click the button the attribute is written to, and it’s automatically styled:


How cool is that! The things you can automate with this are almost endless. It is a huge boon for anyone writing mobile applications with Smart Mobile Studio, and it makes what would otherwise be a difficult task ridiculously easy.


ClientRect, BoundsRect and adventures in Smart Pascal layout land

November 6, 2017

HTML really is the kitchen sink of ideas. Some of them are good, others are bad – but all of them have a valid reason for being there.

When coming from Delphi or C++ Builder to web development, you really feel like you have tumbled down the rabbit hole from time to time. Especially when it comes to things like margins, padding and ClientRect values.

You would imagine that BoundsRect gives you the full size of a control. In fact, BoundsRect() should just be the same as putting Left, Top, Width and Height into a TRect structure, right? Same with ClientRect: it should be the same as putting 0, 0, ClientWidth, ClientHeight into a TRect structure, right?

Smart Mobile Studio uses absolute positioning, which means that you can lay out controls at ordinary Cartesian coordinates. If you place a button at position 10, 10, that means 10 pixels from the left edge and 10 pixels from the top edge. This is what we are used to from Delphi and other native environments.

But the browser has different boxing models, or box-sizing modes if you like. We are using the one best suited for per-pixel positioning, namely “border-box”. This means that the width and height values for the control include the size of the content, its padding and the size of the border. It excludes margin, since that is just empty air the browser adds to the final coordinates of a visual control.
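In plain CSS terms, this is the box-sizing model the RTL opts into. With border-box, an element declared 100 pixels wide with 4 pixels of padding and a 3 pixel border still occupies exactly 100 pixels; only the content box shrinks, to 86 pixels (100 - 2×4 - 2×3). The selector below is illustrative:

```css
/* width/height now include content + padding + border (but never margin) */
.TW3CustomControl {
  box-sizing: border-box;
}
```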

Doing it by the book

Since we are taking Smart out of the homebrew production style these days, there had to come a time when this must be dealt with.

If you don't care about making your own controls then this won't affect you at all. You will always be able to drag & drop some controls on the form designer, or (like most of us do) create them from code and perform layout in the Resize() method.

But .. if you want to make controls that conform to our theme engine, that actually give a damn about margins and padding, and want to give CSS the power to change existing controls the way it deserves – then you better pay attention.

Having experimented with this for a while now, here are the two cardinal rules you must follow if you want your controls to account for the margin, padding and border sizes defined in our CSS theme files:

  1. Margins only apply when positioning child elements with margins
  2. When doing layout of child elements, padding only applies from the parent or container of the content, not the content itself.

Example for rule #1

Imagine you have a panel on a form. You want to populate that panel with 10 child elements, and you want to do it properly: accounting for whatever padding the panel may have, and also whatever margins may exist on some child elements.

  var dx := W3Panel1.Border.Left.Padding;
  var dy := W3Panel1.Border.Top.Padding;
  for var x := 0 to 9 do
  begin
    var Item := FItems[x];
    var ItemRect := TRect.Create(dx, dy, Item.Width, Item.Height);
    ItemRect.Right -= (Item.Border.Margin.Left + Item.Border.Margin.Right);
    ItemRect.Bottom -= (Item.Border.Margin.Top + Item.Border.Margin.Bottom);
    // ItemRect is now margin-aware; apply it to the child and
    // advance dy past the item (plus its vertical margins) here
  end;

Look at the code above. Notice that we don't initialize dx and dy to 0 (zero). We could of course, but that would defeat the purpose of being CSS friendly.

Also notice that we don't add the left and top margin to the final rectangle; this is because the browser automatically does this for us. Instead, we need to shrink the right and bottom edges of the rectangle by subtracting the size of the left and right / top and bottom margins.

So if you want theme-friendly layouts, you have to go the extra mile and include these things.

Note: The above was just an example; our ClientRect() function already deals with padding for us, so you would set dx and dy to ClientRect.Left and ClientRect.Top.

The ClientWidth and ClientHeight methods, however, remain unaffected by padding, because there will be cases where you want full control and non-conformity.

Example of rule #2

Think of a text editor. You want to add a bit of margin to the document, so you simply drag the left-margin widget to where you need it. Padding for HTML elements works pretty much the same way.

To demonstrate I will create a test container class and a test child class.

First, create a new visual application to play with. Drop a TW3Panel control on the form and size it to fill the form (with a bit of air from the edges, naturally).

Next, go into project options and check “use custom stylesheet”. That way Smart will clone whatever style you are using and create a new node in your project manager. Add the following CSS to the stylesheet:

.TTestOwner {
  margin: 20px;
  padding: 4px;
  border: 3px solid #FFFF00;
  background-color: #FF0000;
}

.TTestChild {
  margin: 10px;
  padding: 4px;
  border: 3px solid #000000;
  background-color: #FFFFFF;
}

With the CSS in place, add the following pascal classes to your mainform code, just below the “type” keyword:

TTestChild = class(TW3CustomControl)
end;

TTestOwner = class(TW3CustomControl)
end;

Now, prior to writing this article I made a couple of helper functions. If you are using Alpha 1 (which most of you are), add the following class and code to your Form1 unit:

TW3Theme = class
  class function  AdjustRectToLayoutFactors(const ThisControl: TW3MovableControl; const Rect: TRect): TRect;
  class function  GetPaddedClientRect(const ThisControl: TW3MovableControl): TRect;
end;

class function TW3Theme.GetPaddedClientRect(const ThisControl: TW3MovableControl): TRect;
begin
  if ThisControl <> nil then
  begin
    result := TRect.Create(0, 0, ThisControl.ClientWidth, ThisControl.ClientHeight);
    result.Left += ThisControl.Border.Left.Padding;
    result.Top += ThisControl.Border.Top.Padding;
    result.Right -= ThisControl.Border.Right.Padding;
    result.Bottom -= ThisControl.Border.Bottom.Padding;
  end else
    result := TRect.NullRect;
end;

class function TW3Theme.AdjustRectToLayoutFactors(const ThisControl: TW3MovableControl; const Rect: TRect): TRect;
begin
  if ThisControl <> nil then
  begin
    (*  Rule #1, "margins should only be added when dealing with child elements".
        Since we are using "border-box" as our size model, padding and border are
        already included in the clientwidth / clientheight values we get from
        the browser.

        More importantly, margin only affects the left and top edge of a rectangle,
        because those are the only factors that affect MoveTo() type functionality
        in the browser itself.

        So when the browser moves an element to position 10px, 10px, it automatically
        adds the margin. If you have a margin of 10 pixels - the result will be
        (visually) that the control ends up at 20px, 20px instead. *)

    // Start with a carbon copy of the rectangle we were given
    result := Rect;

    (*  Note: The browser can only know about the left and top edge when
        placing elements. It cannot see into the future to know the exact
        height of an element, or if the content will suddenly grow. So we
        have to calculate the right and bottom based on our knowledge
        from the Rect parameter *)
    result.Right  -= (ThisControl.Border.Left.Margin + ThisControl.Border.Right.Margin);
    result.Bottom -= (ThisControl.Border.Top.Margin + ThisControl.Border.Bottom.Margin);

    (*  Rule #2: Padding should only be applied from a control's parent when
        calculating a position for that child. This is recursive, so a parent
        will apply this to its children, and each child will force its
        padding on any children it may house. *)

    if ThisControl.Parent <> nil then
    begin
      var Owner := TW3MovableControl(ThisControl.Parent);
      result.Left += Owner.Border.Left.Padding;
      result.Top += Owner.Border.Top.Padding;
      result.Right -= Owner.Border.Right.Padding;
      result.Bottom -= Owner.Border.Bottom.Padding;
    end;
  end else
    result := Rect;
end;

With both the styling and the Pascal classes out of the way, let's add some code to get the magic working.

So copy & paste this into your W3Form1.InitializeForm() procedure:

  var LRect:  TRect;
  var Box:    TTestOwner;
  var Child:  TTestChild;

  // Create parent container
  Box := TTestOwner.Create(W3Panel1);
  LRect := TRect.Create(0, 0, W3Panel1.ClientWidth, 300);
  LRect := TW3Theme.AdjustRectToLayoutFactors(Box, LRect);
  // LRect is now margin/padding aware; apply it to Box (e.g. via SetBounds)

  // create child element for our test-owner
  Child := TTestChild.Create(Box);
  LRect := TRect.Create(0, 0, Box.ClientWidth, Box.ClientHeight);
  LRect := TW3Theme.AdjustRectToLayoutFactors(Child, LRect);
  // ... and likewise apply this LRect to Child

Note: The TW3Theme class is a part of Alpha 2 which should hit the download section next week. But you should now have everything you need to get this working, no matter what version of Smart Mobile Studio you are using.

Putting it all together

OK, let's run our application and have a look at the results. What we should see is a panel on a form; inside that should be a box that is 20 pixels from the edges (since the CSS defines 20 pixel margins). The box also has 4 pixels of padding defined, so the total offset from the edges should be 24 pixels.

The child control inside the box likewise has margin and padding: it operates with 10 pixels of margin and 4 pixels of padding. It also sports a 3 pixel border. So let's see what we have so far:


As you can see, it's not that hard to deal with; just a bit of a brain teaser. Those who write custom controls for Delphi are used to dealing with stuff like this all the time. The difference is that native languages are less cryptic about things, and they also make Width / Height return the full size of the control, regardless of what the content may be.

You might have noticed that Delphi has a newer “align with margin” feature? Not sure when it came into the system, but somewhere around Delphi XE I believe (?). There you define the size of the margin, and Delphi does the rest. You don't have to think about the size of the margin, and it only comes into play when the Align property is activated.

Final notes

We are doing some brainstorming on how to best deal with these things right now. Personally I think the code I have shown so far, especially the helper code, goes a long way to make this easy to work with.

Some have voiced that ClientRect should always start at zero, but why is that? Where does it say that ClientRect should always be (0, 0, Width-1, Height-1)? That is not the voice of reason; that is the sound of old habits! The whole point of having a ClientRect, be it in Delphi, Lazarus, C++ or C#, is that it can change. It would be equally futile to demand that ClipRect should always be the same as ClientRect. That is to utterly miss the whole point of sequential rendering and fast graphics.

So the lesson is: if you play by the rules and never use hard-coded values, then your code won't be affected. And if you want to adjust so your code is 100% theme compatible (and again, this is only valuable for component writers), then calling a simple function to get the rectangle adjusted for margins etc. is not exactly rocket science. It's a one-liner.

Well, hope it helps!

Custom dialog and loading data from JSON in Smart Pascal

October 30, 2017

Right now we are putting the finishing touches on our next update, which contains our new theme engine. As mentioned earlier (especially if you follow us on Facebook) the new system builds on the older – but we have separated border and background from the basic element styling.

When working with the new theme system, I needed an application that could demonstrate and show all the different border and background types and most of our visual controls – but also information about what Smart Mobile Studio is, what its features are and where you can buy it.


So it started as a personal application just to get a good overview of the CSS themes I was working on, but it has become an example in its own right.

Don't hardcode, just don't

If you look at the picture above, there is a MenuList with the options “Introduction”, “Features” and “Where to buy”. When you click these, I naturally want to inform the user by displaying the corresponding information.

I could have hardcoded the information text into the application; in many ways that would have been simpler (considering the data requirements here are practically insignificant). But all that text within the source? I hate mess like that.

Secondly, how exactly was I going to show this information? Would I use the modal framework already in place, or code something more lightweight?

As always I ended up making a new and more lightweight system. A reader style dialog appears and allows you to scroll vertically. The header contains the title of the information and a close button.


Typical “reader” style dialog with scrolling

I also used a block-box to prevent the user from reaching the UI until they click the close-button. You’ll notice that the form, toolbar and header in the back are darkened. This is actually a semi-transparent control that does one thing: prevent anyone from clicking or interacting with the UI while the dialog is active.

The JSON file structure

The structure I needed was very simple: each record would have a unique ID that we use to fetch and recognize content, plus a Title and a Text property. It really doesn’t have to be more difficult than that.

To work with the JSON I used the online editor JSONEditorOnline, which is actually really good! It allows you to write your JSON and then format it so that special characters (like CR+LF) are properly encoded.
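To illustrate, a file following that structure could look like the snippet below. Only the id/title/text shape comes from the article; the field values are made up. Since Smart compiles to JavaScript, loading such a file ultimately boils down to a plain JSON.parse():

```javascript
// Example of the record layout described above: a root "infotext" array
// where each record carries an id, a title and a text property.
const dbText = `{
  "infotext": [
    { "id": "introduction", "title": "Introduction", "text": "Smart Mobile Studio is ..." },
    { "id": "features",     "title": "Features",     "text": "The compiler supports ..." }
  ]
}`;

const db = JSON.parse(dbText);

// Looking a record up by id - roughly what GetRecById() does,
// including the trim/lowercase normalization:
function getRecById(id) {
  return db.infotext.find(rec => rec.id === id.trim().toLowerCase()) || null;
}
```

A lookup like `getRecById('introduction')` then hands you the whole record, ready to feed to the reader dialog.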

Putting it all together

Having coded the dialog first, I sat down and finished a sort of “Turbo Pascal” record database system for this particular file format. It’s not very big, nor extremely advanced – but that’s the entire point! Throwing SQLite or MongoDB at something as simple as a few records of data is just a complete waste of time and effort.

Right, let’s have a peek at the code shall we!

unit infodialog;

interface

uses
  // Note: adjust unit names to match your RTL version
  System.Types, System.JSON,
  SmartCL.System, SmartCL.Components;

type

  TInfoDialog = class(TW3Panel)
  private
    FHead:    TW3HeaderControl;
    FBox:     TW3Scrollbox;
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure Resize; override;
  public
    property  Header: TW3HeaderControl read FHead;
    property  Content: TW3Scrollbox read FBox;

    class function ShowDialog(Title, Content: string): TInfoDialog;
  end;

  TAppInfoRecord = record
    iiId:     string;
    iiTitle:  string;
    iiText:   string;
    procedure Clear;
    class function Create(const Id, Title, Text: string): TAppInfoRecord;
  end;

  TAppInfoDB = class(TObject)
  private
    FStack:     array of TStdCallback;
    FItems:     array of TAppInfoRecord;

    procedure   Parse(DBText: string);

    procedure   HandleDataLoaded(const FromUrl: string;
                const TextData: string; const Success: boolean);
  public
    property    Empty: boolean read ( (FItems.Count < 1) );
    property    Count: integer read (FItems.Count);
    property    Items[index: integer]: TAppInfoRecord
                read  (FItems[index])
                write (FItems[index] := Value);

    function    GetRecById(Id: string; var Info: TAppInfoRecord): boolean;

    procedure   LoadFrom(Url: string; const CB: TStdCallback);
    procedure   Clear;

    destructor  Destroy; override;
  end;

implementation

uses SmartCL.Application;

// TAppInfoRecord

class function TAppInfoRecord.Create(const Id, Title, Text: string): TAppInfoRecord;
begin
  result.iiId := Id.trim();
  result.iiTitle := Title.trim();
  result.iiText := Text;
end;

procedure TAppInfoRecord.Clear;
begin
  iiId := '';
  iiTitle := '';
  iiText := '';
end;

// TAppInfoDB

destructor TAppInfoDB.Destroy;
begin
  if FItems.Count > 0 then
    Clear();
  inherited;
end;

procedure TAppInfoDB.Clear;
begin
  FItems.Clear();
end;

function TAppInfoDB.GetRecById(Id: string; var Info: TAppInfoRecord): boolean;
begin
  result := false;
  if not Empty then
  begin
    Id := Id.trim().ToLower();
    if Id.length > 0 then
    begin
      for var x := 0 to Count-1 do
      begin
        result := Items[x].iiId.ToLower() = Id;
        if result then
        begin
          Info := Items[x];
          break;
        end;
      end;
    end;
  end;
end;

procedure TAppInfoDB.Parse(DBText: string);
var
  vId:    variant;
  vTitle: variant;
  vText:  variant;
begin
  DBText := DBText.trim();
  if DBText.length > 0 then
  begin
    var FDb := TJSONObject.Create;
    FDb.FromJSON(DBText);

    if FDb.Exists('infotext') then
    begin
      // get the infotext -> [] array of JS objects
      var Root: TJSInstanceArray := TJSInstanceArray( FDb.Values['infotext'] );

      for var x := 0 to Root.Count-1 do
      begin
        var node := TJSONObject.Create(Root[x]);
        if node <> nil then
        begin
          node.Read('id', vId)
              .Read('title', vTitle)
              .Read('text', vText);

          FItems.add( TAppInfoRecord.Create(vId, vTitle, vText) );
        end;
      end;
    end;
  end;
end;


procedure TAppInfoDB.LoadFrom(Url: string; const CB: TStdCallback);
begin
  if assigned(CB) then
    FStack.add(CB);
  TW3Storage.LoadFile(Url, @HandleDataLoaded);
end;

procedure TAppInfoDB.HandleDataLoaded(const FromUrl: string;
          const TextData: string; const Success: boolean);
begin
  // Parse if data ready
  if Success then
    Parse(TextData);

  // Perform callbacks
  while FStack.Count > 0 do
  begin
    var CB := FStack.pop();
    if assigned(CB) then
      CB(Success);
  end;
end;

// TInfoDialog

procedure TInfoDialog.InitializeObject;
begin
  inherited;
  FHead := TW3HeaderControl.Create(self);
  FHead.BackButton.Visible := false;
  FHead.NextButton.Caption := 'Close';

  // By default the header text is centered within the space allocated for it,
  // which by default is 2/4. This can look a bit off when we never show
  // the left-button. So we force text-align to the left:
  FHead.Title.Handle.style['text-align'] := 'left';

  FBox := TW3Scrollbox.Create(self);
  FBox.ScrollBars := sbIndicator;
end;

procedure TInfoDialog.FinalizeObject;
begin
  FHead.free;
  FBox.free;
  inherited;
end;

procedure TInfoDialog.Resize;
begin
  inherited;
  var LBounds := ClientRect;
  var dy := 0;

  if FHead <> nil then
  begin
    FHead.SetBounds(LBounds.left, LBounds.top, LBounds.width, 32);
    inc(dy, FHead.Height + 1);
  end;

  if FBox <> nil then
    FBox.SetBounds(LBounds.left, dy, LBounds.width, LBounds.height - dy);
end;

class function TInfoDialog.ShowDialog(Title, Content: string): TInfoDialog;
begin
  var Host := Application.Display;
  var Shade := TW3BlockBox.Create(Host);
  Shade.SetBounds(0, 0, Host.Width, Host.Height);

  var wd := Host.Width * 90 div 100;
  var hd := Host.Height * 80 div 100;
  var dx := (Host.Width div 2) - (wd div 2);
  var dy := (Host.Height div 2) - (hd div 2);

  var Dialog := TInfoDialog.Create(Shade);
  Dialog.Header.Title.Caption := Title;
  Dialog.SetBounds(dx, dy, wd, hd);
  Dialog.fxZoomIn(0.3, procedure ()
    begin
      Dialog.Content.Content.InnerHTML := Content;
    end);

  Dialog.Header.NextButton.OnClick := procedure (Sender: TObject)
    begin
      Dialog.fxFadeOut(0.2, procedure ()
        begin
          // Give the effect time to settle before tearing down the block-box
          TW3Dispatch.Execute( procedure ()
            begin
              Shade.free;
            end, 100);
        end);
    end;

  result := Dialog;
end;

end.


Using the code

The first thing you want to do is to create an instance of TAppInfoDB when your application starts. Remember to add your JSON file to the project and make sure it’s formatted properly, then use the LoadFrom() method to load in the data:

  // Create our info database and load in the
  // introduction, features etc. JSON datafile
  FInfoDb := TAppInfoDB.Create;
  FInfoDb.LoadFrom('res/JSON1', nil);

The final parameter of LoadFrom() is a callback, so if you want to be notified when the file has loaded, just pass an anonymous procedure there.

Showing a dialog with the information is then reduced to looking up the text you need by its ID and firing up the reader dialog for it:

  W3Button1.OnClick := procedure (Sender: TObject)
    begin
      var LInfo: TAppInfoRecord;
      if FInfoDb.GetRecById('introduction', LInfo) then
        TInfoDialog.ShowDialog(LInfo.iiTitle, LInfo.iiText);
    end;

And that’s it! Simple, effective and ready to be dropped into any application. Enjoy!

Making your own DOM events in Smart Pascal

October 20, 2017 Leave a comment

Being able to listen to events is fairly standard stuff in Smart Mobile Studio and JavaScript in general. But what is not so common is to create your own event-types from scratch that fire on a target, and that users of JS can listen to and use.


Now before you get confused and think this is a newbie post: I am talking about DOM (document object model) level events here; these are quite different from the event model we have in object pascal. What I’m talking about is being able to create events that external libraries can use – libraries written in plain JavaScript rather than Smart Pascal.

Interesting events

While you may think that events like this, akin to all the other DOM events, have little or no use – think again. First of all you can dispatch them on any element or event-emitter, so you can in fact register events on common elements like Document. You can then use custom events as a bridge between your Smart code and third-party libraries. If you have written a kick-ass media system and want to sell it to a customer who only knows JavaScript, native JS events can act as that bridge.

Right, let’s look at a little unit I wrote to simplify this:

unit userevents;

interface

uses
  // Note: adjust unit names to match your RTL version
  System.Types, System.JSON, SmartCL.System;

type

  IW3Prototype = interface
    procedure AddField(FieldName: string; const DataType: TRTLDatatype);
    function  FieldExists(FieldName: string): boolean;
    procedure SetEventName(EventName: string);
  end;

  TW3CustomEvent = class(TObject, IW3Prototype)
  private
    FName:      string;
    FData:      TJSONObject;
    FDefining:  boolean;
    procedure   SetEventName(EventName: string);
    procedure   AddField(FieldName: string; const DataType: TRTLDatatype);
    function    FieldExists(FieldName: string): boolean;
    function    GetReady: boolean;
  public
    property    Name: string read FName;
    property    Ready: boolean read GetReady;

    function    DefinePrototype(var IO: IW3Prototype): boolean;
    procedure   EndDefine(var IO: IW3Prototype);
    function    NewEventData: TJSONObject;

    procedure   Dispatch(const Handle: TControlHandle; const EventData: TJSONObject);

    constructor Create; virtual;
    destructor  Destroy; override;
  end;

implementation
// TW3CustomEvent

constructor TW3CustomEvent.Create;
begin
  inherited Create;
  FData := TJSONObject.Create;
end;

destructor TW3CustomEvent.Destroy;
begin
  FData.free;
  inherited;
end;

function TW3CustomEvent.GetReady: boolean;
begin
  result := (FDefining = false) and (FName.Length > 0);
end;

procedure TW3CustomEvent.Dispatch(const Handle: TControlHandle; const EventData: TJSONObject);
var
  LEvent: THandle;
  LParamData: variant;
begin
  if GetReady() then
  begin
    if (Handle) then
    begin
      // Check for detail-fields, get javascript object if available
      if EventData <> nil then
        if EventData.Count > 0 then
          LParamData := EventData.Instance;

      if (LParamData) then
      begin
        // Create event object with detail-data
        var LName := FName.ToLower().Trim();
        asm
          @LEvent = new CustomEvent(@LName, { detail: @LParamData });
        end;
      end else
      begin
        // Create event without detail-data
        var LName := FName.ToLower().Trim();
        asm
          @LEvent = new Event(@LName);
        end;
      end;

      // Dispatch event-object
      Handle.dispatchEvent(LEvent);
    end;
  end;
end;

procedure TW3CustomEvent.SetEventName(EventName: string);
begin
  if FDefining then
  begin
    EventName := EventName.Trim().ToLower();
    if EventName.Length > 0 then
      FName := EventName
    else
      raise EW3Exception.Create
      ('Invalid or empty event-name error');
  end else
    raise EW3Exception.Create
    ('Event-name can only be written while defining error');
end;

function TW3CustomEvent.FieldExists(FieldName: string): boolean;
begin
  if FDefining then
    result := FData.Exists(FieldName)
  else
    raise EW3Exception.Create
    ('Fields can only be accessed while defining error');
end;

procedure TW3CustomEvent.AddField(FieldName: string; const DataType: TRTLDatatype);
begin
  if FDefining then
  begin
    if not FData.Exists(FieldName) then
      FData.AddOrSet(FieldName, TDataType.NameOfType(DataType))
    else
      raise EW3Exception.CreateFmt
      ('Field [%s] already exists in prototype error', [FieldName]);
  end else
    raise EW3Exception.Create
    ('Fields can only be accessed while defining error');
end;

function TW3CustomEvent.NewEventData: TJSONObject;
const
  MAX_INT_16 = 32767;
  MAX_INT_08 = 255;
begin
  // Start from a copy of the prototype, then swap each stored type-name
  // for a sensible default value of that datatype
  result := TJSONObject.Create;
  result.FromJSON(FData.ToJSON());
  result.ForEach(
    function (Name: string; var Data: variant): TEnumState
    begin
      // clear data with datatype value to initialize
      // (the exact TRTLDatatype member names depend on your RTL version;
      //  each integer and float variant gets a similar default)
      case TDataType.TypeByName(TVariant.AsString(Data)) of
      itBoolean:  Data := false;
      itByte:     Data := MAX_INT_08;
      itWord:     Data := MAX_INT_16;
      itLong:     Data := MAX_INT;
      itFloat32:  Data := 0.0;
      itFloat64:  Data := 0.0;
      itString:   Data := '';
      else        Data := null;
      end;
      result := esContinue;
    end);
end;

function TW3CustomEvent.DefinePrototype(var IO: IW3Prototype): boolean;
begin
  result := not FDefining;
  if result then
  begin
    FDefining := true;
    IO := (Self as IW3Prototype);
  end;
end;

procedure TW3CustomEvent.EndDefine(var IO: IW3Prototype);
begin
  if FDefining then
    FDefining := false;
  IO := nil;
end;

end.


Patching the RTL

Sadly there was a bug in the RTL that prevented TJSONObject.ForEach() from functioning properly. This has been fixed in the update we are preparing now, but it will still be a few days before that is released.

You can patch this manually right now with this little fix. Just go into the System.JSON.pas file and replace the TJSONObject.ForEach() method with this one:

function TJSONObject.ForEach(const Callback: TTJSONObjectEnumProc): TJSONObject;
var
  LData:  variant;
begin
  result := self;
  if assigned(Callback) then
  begin
    var NameList := Keys();
    for var xName in NameList do
    begin
      Read(xName, LData);
      if Callback(xName, LData) = esContinue then
        Write(xName, LData)
      else
        break;
    end;
  end;
end;

Creating events

Events come in two flavours: those with data and those without. This is why we have the DefinePrototype() and EndDefine() methods – namely to define what data fields the event should take. If you don’t populate the prototype, the class will create an event without detail-data.

Secondly, events don’t need to be registered anywhere. You create one, dispatch it to a handle (or element), and if there is an event-listener attached there looking for that name – it will fire.

Ok let’s have a peek:

  // Create a custom, new, system-wide event
  // (the event name 'myevent' is just an example)
  var LEvent := TW3CustomEvent.Create;
  var IO: IW3Prototype = nil;
  if LEvent.DefinePrototype(IO) then
  begin
    IO.SetEventName('myevent');
    IO.AddField('name', TRTLDataType.itString);
    IO.AddField('id', TRTLDataType.itInt32);
    LEvent.EndDefine(IO);
  end;

  // Setup a normal event-listener
  Display.Handle.addEventListener('myevent',
    procedure (ev: variant)
    begin
      var data := ev.detail;
      if (data) then
        writeln(data.name);
    end);

  // Populate some event-data
  var MyData := LEvent.NewEventData();
  MyData.Write('name', 'John Doe');
  MyData.Write('id', '{F6EB5680-5DC1-422E-8F72-5C60EAC0B46F}');

  // Now send the event to whomever is listening
  LEvent.Dispatch(Display.Handle, MyData);

In the above example I use the Application.Display control as the event-target. There is no special reason for this except that it’s always available. You would naturally create events like this inside your TW3CustomControl (or perhaps the Document element, under a namespace).

You will also notice that any data sent ends up in the “detail” field of the event object. We use a variant datatype since that maps directly to any JS object and lets us access (and create) any property; that’s why the “ev” parameter in addEventListener() is a variant, not a fixed class.

Well, hope you enjoy the show and happy coding!

PS: Smart now uses an event-manager to deal with input events (mouse, touch), but the other events work like before. Have a look at SmartCL.Events.pas for some time-saving event classes. Instead of having to use ASM sections and variants, you can use object pascal classes to map any event.

Smart Mobile Studio and CSS: part 4

October 18, 2017 Leave a comment

If you missed the previous articles, I urge you to take the time to read through them. While not explicit to the content of this article, they will give you a better context for the subject of CSS and how Smart Mobile Studio deals with things:

Scriptable CSS

If you are into web technology you probably know that the latest fad is so-called CSS compilers [sigh]. One of the more popular is called Less; then you have SASS, which seems to have better support in the community. I honestly could not care less (pun intended).

So what exactly is a CSS compiler and why should it matter to you as a Smart Pascal developer? That is a good question! First, it doesn’t matter to you at all. Not one iota. Why? Because Smart Mobile Studio has supported scriptable CSS for years. So while the JS punters think they have invented gunpowder, they keep re-inventing the exact same stuff native languages and their programmers have used for ages. They just bling it up with cool names to make it seem all new and dandy (said the grumpy 44-year-old man child).

In short a CSS compiler allows you to:

  • Define variables and constant values you can use throughout your style-sheet
  • Define repeating sections of CSS, a poor man’s “for-next block” if you like
  • Merge styles together, which is handy at times

Smart Mobile Studio took it all one step further, because we have a lot more technology on our hands than just vanilla JavaScript. So what we did was to dump the whole onslaught of power from Delphi Web Script – and we bolted that into our CSS linker process. So while the JS guys have a parser system with a ton of cryptic identifiers – we added something akin to ASP to our CSS module. It’s complete overkill but it just makes me giggle like a little girl whenever I use it.


The new themes being created now all tap into scripting to automate things

But how does it work, you say? Does it execute with the program? Nope. It’s purely a part of the linking process, so it executes when you compile your program. Whatever you emit (using the Print() method) or assign via the tags ends up at that location in the output. Think PHP or ASP, but for CSS:

  1. Smart takes your CSS file (with code) and feeds it to DWScript
  2. DWScript runs it, and spits out the result to a buffer
  3. The buffer is sent to the linker
  4. The linker saves the data either as a separate CSS file, or statically links it into your HTML file.
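A toy model of those steps fits in a few lines. The real pipeline runs full DWScript, but for a simple `<?pas=NAME?>` value tag the effect is equivalent to a search-and-replace against a table of script constants (the constant names below are examples):

```javascript
// Expand <?pas=NAME?> tags against a table of "script constants".
// Unknown names are left untouched, mirroring that the real linker
// executes actual code rather than doing blind substitution.
function expandCssTags(css, consts) {
  return css.replace(/<\?pas=(\w+)\?>/g, (tag, name) =>
    name in consts ? consts[name] : tag);
}

const theme = { EdgeRounding: '4px', clDlgBtnFace: '#ededed' };
const out = expandCssTags(
  '.TW3Button { border-radius: <?pas=EdgeRounding?>; background: <?pas=clDlgBtnFace?>; }',
  theme);
// out: '.TW3Button { border-radius: 4px; background: #ededed; }'
```

Change `EdgeRounding` in one place and every rule that references the tag picks up the new value at link time.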

Pretty cool or what!

So what good can that do?

It can do a world of good. For instance, when you create a theme it’s important to use the same values to ensure that things have the same general layout, colors and styles. Since you can now use constants, variables, for/next loops, classes, records and pretty much everything DWScript has to offer – you have a huge advantage over these traditional JS based compilers.

  • Gradients are generated via a pascal function
  • Font names are managed via constants
  • Font sizes can be made uniform throughout the theme
  • Standard colors that you can also define in your Smart code, thus having a unified color system, can be easily shared between the css-pascal and the smart-pascal codebases.
  • Instead of defining the same color over and over again, perhaps in hundreds of places, use a constant. When you need to adjust something you change one value instead of 200 values!

It’s no secret that browser standards are hard to deal with. For instance, did you know that there are three different webkit formats for defining a top-down gradient? Then you have the firefox version, the microsoft (edge) version, the microsoft IE version, the opera version and heaven-forbid: the W3C “standard” that nobody seems interested in supporting. Having to hand-carve the same gradients over and over for the different backgrounds of a theme can be both time-consuming and infuriating.
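To see why automation pays off, here is the same idea reduced to plain JavaScript: one function, one mask per browser family, and every background gets all the variants for free. (The mask list is abridged; the pascal version further down covers six formats, including the old IE filter syntax.)

```javascript
// Emit a top-down two-colour gradient in several vendor syntaxes.
// Each mask has one $a (start colour) and one $b (end colour) slot.
function gradientTopBottom(from, to) {
  const masks = [
    '-webkit-linear-gradient(top, $a 0%, $b 100%)',
    '-moz-linear-gradient(top, $a 0%, $b 100%)',
    '-ms-linear-gradient(top, $a 0%, $b 100%)',
    'linear-gradient(to bottom, $a 0%, $b 100%)'
  ];
  return masks.map(m => m.replace('$a', from).replace('$b', to));
}

const lines = gradientTopBottom('#FFFFFF', '#F0F0F0');
// lines[3]: 'linear-gradient(to bottom, #FFFFFF 0%, #F0F0F0 100%)'
```

Hand-writing those four (or six) lines per background per theme is exactly the drudgery the scriptable CSS removes.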

Let’s look at some code that can be used in your stylesheets straight away. It’s almost like a mini-unit that perhaps should be made external later. But for now, have a peek:

<?pas
  const EdgeRounding          = "4px";
  const clDlgBtnFace          = "#ededed";

  //#############################################
  // Fonts
  //#############################################
  const fntDefaultName = '"Ubuntu"';
  const fntSmallSize   = "12px";
  const fntNormalSize  = "14px";
  const fntMediumSize  = "18px";
  const fntLargeSize   = "24px";
  const fntDefaultSize = fntNormalSize;

  type
  TRGBAText = record
    rs: string;
    gs: string;
    bs: string;
    ac: string;
  end;

  type
  TBrowserFormat = (
    gtWebkit1,
    gtWebkit2,
    gtMoz,
    gtMs,
    gtIE,
    gtAny
  );

  function GetR(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 1, 2);
      result := HexToInt(temp).ToString();
    end else
      result := '00';
  end;

  function GetG(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 3, 2);
      result := HexToInt(temp).ToString();
    end else
      result := '00';
  end;

  function GetB(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 5, 2);
      result := HexToInt(temp).ToString();
    end else
      result := '00';
  end;

  function OpacityToStr(const Opacity: float): string;
  begin
    result := FloatToStr(Opacity);
    if result.IndexOf(',') > 0 then
      result := StrReplace(result, ',', '.');
  end;

  function ColorDefToRGB(const ColorDef: string): TRGBAText;
  begin
    result.rs := GetR(ColorDef);
    result.gs := GetG(ColorDef);
    result.bs := GetB(ColorDef);
    result.ac := '1.0';
  end;

  function ColorDefToRGBA(const ColorDef: string; Opacity: float): TRGBAText;
  begin
    result.rs := GetR(ColorDef);
    result.gs := GetG(ColorDef);
    result.bs := GetB(ColorDef);
    result.ac := OpacityToStr(Opacity);
  end;

  function GetRGB(ColorDef: string): string;
  begin
    result += 'rgb(';
    result += GetR(ColorDef) + ', ';
    result += GetG(ColorDef) + ', ';
    result += GetB(ColorDef);
    result += ')';
  end;

  function GetRGBA(ColorDef: string; Opacity: float): string;
  begin
    result += 'rgba(';
    result += GetR(ColorDef) + ', ';
    result += GetG(ColorDef) + ', ';
    result += GetB(ColorDef) + ', ';
    result += OpacityToStr(Opacity);
    result += ')';
  end;

  function SetGradientRGBSInMask(const Mask: string; First, Second: TRGBAText): string;
  begin
    result := StrReplace(Mask,   '$r1', First.rs);
    result := StrReplace(result, '$g1', First.gs);
    result := StrReplace(result, '$b1', First.bs);

    if result.contains('$a1') then
      result := StrReplace(result, '$a1', First.ac);

    result := StrReplace(result, '$r2', Second.rs);
    result := StrReplace(result, '$g2', Second.gs);
    result := StrReplace(result, '$b2', Second.bs);

    if result.contains('$a2') then
      result := StrReplace(result, '$a2', Second.ac);
  end;

  function GradientTopBottomA(FromColorDef, ToColorDef: TRGBAText;
           BrowserFormat: TBrowserFormat): string;
  begin
    var xFirst := FromColorDef;
    var xSecond := ToColorDef;

    case BrowserFormat of
    gtWebkit1:
      begin
        var mask := "-webkit-gradient(linear, left top, left bottom, color-stop(0, rgba($r1,$g1,$b1,$a1)), color-stop(100, rgba($r2,$g2,$b2,$a2)))";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtWebkit2:
      begin
        var mask := "-webkit-linear-gradient(top, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtMoz:
      begin
        var mask := "-moz-linear-gradient(top, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtMs:
      begin
        var mask := "-ms-linear-gradient(top, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtIE:
      begin
        var mask := "filter: progid:DXImageTransform.Microsoft.gradient(startColorstr=rgba($r1,$g1,$b1,$a1), endColorstr=rgba($r2,$g2,$b2,$a2),GradientType=0)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtAny:
      begin
        var mask := "linear-gradient(to bottom, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    end;
  end;

  function GradientTopBottom(FromColorDef, ToColorDef: string;
           BrowserFormat: TBrowserFormat): string;
  begin
    (* var xFirst  := ColorDefToRGB(FromColorDef);
       var xSecond := ColorDefToRGB(ToColorDef);
       var mask := ''; *)

    case BrowserFormat of
    gtWebkit1:
      begin
        var mask := "-webkit-gradient(linear, left top, left bottom, color-stop(0, $a), color-stop(100, $b))";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtWebkit2:
      begin
        var mask := "-webkit-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtMoz:
      begin
        var mask := "-moz-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtMs:
      begin
        var mask := "-ms-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtIE:
      begin
        var mask := "filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='$a', endColorstr='$b',GradientType=0)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtAny:
      begin
        var mask := "linear-gradient(to bottom, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    end;
  end;
This code has to be placed at the top of your CSS; it should be the very first thing in the file. Now let’s make some gradients!

.TW3ButtonBackground {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtWebkit1)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtWebkit2)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtMoz)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtMs)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtIE)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtAny)?>;
}

.TW3ButtonBackground:active {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtWebkit1)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtWebkit2)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtMoz)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtMs)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtIE)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtAny)?>;
}

Surely you agree that the above makes gradients a lot easier to work with? (And we can simplify it even more later.) You can also abstract it further right now by putting the start and stop colors into constants – making it super easy to maintain and change whatever styles use those colors.

Now let’s use our styles for something. Start a new Smart Mobile Studio Visual Project. Do as mentioned in the previous articles and make the stylesheet visible (project options, use custom css).

Now copy and paste in the code on top of your css-file, then copy and paste in the style-code above at the end of the css-file.

In the Smart IDE, drop a button on the form, then go into the code-editor and locate InitializeForm. Add the following to the procedure:

w3button1.StyleClass := 'TW3ButtonBackground';

Compile and run the program and voila: you now have a button with a nice gradient background. A gradient that will work in all modern browsers, and that will be easy to maintain and change later should you want to.

Start today

Smart has had support for scriptable CSS files for quite some time. If you go into the Themes folder of your Smart Mobile Studio installation, you will find plenty of CSS files. Many of these use scripting as a part of their makeup. But it’s only recently that we have started to actively use it as it was meant to be used.

But indeed, spend a little time looking at the code in the existing stylesheets, and feel free to play around with the code I have posted here. The sky is the limit when it comes to creative and elegant solutions – so I’m sure you guys will do miracles with it.

Smart Mobile Studio and CSS: part 3

October 12, 2017 Leave a comment

In the first article we looked at some ground rules for how Smart deals with CSS. The most important part is how Smart Mobile Studio maps pascal class names to CSS style names. Simple, but extremely effective.

In the second article we looked at how you should write classes to make styling easy. We also talked about code discipline and that you should never use TW3CustomControl directly, because it makes styling time-consuming and cumbersome.

In this article we are going to cover two things: first we are going to look at probably the most powerful feature CSS has to offer, namely cascades. And then we are going to talk a bit about the new theme system we are working on. Please note that the new theme system is not yet available in the alpha releases. Like all things it has to go through the testing stage. All our visual controls need a little adjustment to support the new themes as well, which doesn’t affect you – but it is time-consuming work.


Writing a CSS style for your control should be pretty easy if you have read our previous two articles. But do you really want a large, complex and monolithic style? If you have a look at any of the stylesheets that ship with Smart Mobile Studio (there are several), you will probably agree that they are not easy to understand at times. Each control has its style definition, that part is clear, but every style includes font, backgrounds, text colors, text shadowing, margins, borders, border shadows, gradients (ad nauseam). Long story short: stylesheets like this are hell to maintain and extremely time-consuming to make.

CSS has this cool feature where you can take any number of styles and apply them to the same element. This might sound nutty at first, but think it through, because it is going to make your life a lot easier:

  • We can isolate the border style separately
  • We can have multiple border styles and pick the ones we want, rather than a single, hardcoded and fixed version
  • We can define the backgrounds, any number of them, as separate styles

That doesn’t sound too bad, does it? But wait, there is more!

Remember how I told you that animations are also defined in CSS? Since CSS allows you to add multiple styles to a control, this also means you can define a style with an animation – and then just add it when you want something to happen, and then remove the style when you don’t need it any more.

You have probably seen the spinners that websites use, right? While the website is loading something, a circle or dot keeps rotating to signal that work is being performed in the background. Well, that’s pretty easy to achieve when you understand how cascades work. You just define the animation, use it in a style – and then add that style to your control. When you want to stop the behavior you just remove the style. That’s it!
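Under the hood, “adding and removing a style” is nothing more than membership in the element’s space-separated class string – what classList.add()/remove() do for you on a real DOM element. A sketch of the idea in plain JavaScript (no DOM needed; the “SpinnerAnimation” class name is made up):

```javascript
// Toggle membership in a space-separated class string, the way
// classList.add()/remove() operate on a real element's className.
function addClass(classes, name) {
  const list = classes.split(/\s+/).filter(Boolean);
  if (!list.includes(name)) list.push(name);
  return list.join(' ');
}

function removeClass(classes, name) {
  return classes.split(/\s+/).filter(c => c && c !== name).join(' ');
}

// Start the spinner: the control keeps its pascal-class style and
// gains the animation style on top of it.
let cls = addClass('TW3CustomControl', 'SpinnerAnimation');
// cls: 'TW3CustomControl SpinnerAnimation'

// Work finished - detach the animation again:
cls = removeClass(cls, 'SpinnerAnimation');
// cls: 'TW3CustomControl'
```

The control’s base style never goes away; the animation style simply cascades on top while it is attached.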

But let’s start with something simple: define a border and a background and apply them to a control in code. And remember: styles you add do not exclude the initial style. As we talked about earlier, Smart takes the pascal classname and uses a CSS style with that name; whatever you add to the control comes on top of that. Which is really powerful!

You probably want to start a new visual project for this one. And remember to pick a theme in the project options (in my case I picked the “Android-HoloLight.css” theme, just so you know), save the project, then go back into the options and check the “Use custom theme” checkbox. Again exit the dialog and click save – the IDE will now create a copy of whatever theme you picked and give you direct access to it from the IDE.

Remember to check the custom theme in project options

With that out of the way you should now have a fresh, blank visual application with an item called “Custom CSS” in the project manager list. Now double click on that like before so we can get cracking, and go down to the end of the file. Add the following text:

<?pas
  const EdgeRounding = "4px";
  const clDlgBtnFace = "#ededed";
?>

.TMyButtonBorder {
  border-radius:  <?pas=EdgeRounding?>;
  border-top:     1px solid rgba(250, 250, 250, 0.7);
  border-left:    1px solid rgba(250, 250, 250, 0.7);
  border-right:   1px solid rgba(240, 240, 240, 0.5);
  border-bottom:  1px solid rgba(240, 240, 240, 0.5);

  -webkit-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
     -moz-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
          box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
}

.TMyButtonBorder:active {
  border-radius:  <?pas=EdgeRounding?>;
  border-top:     1px solid rgba(240, 240, 240, 0.5);
  border-left:    1px solid rgba(240, 240, 240, 0.5);
  border-right:   1px solid rgba(250, 250, 250, 0.7);
  border-bottom:  1px solid rgba(250, 250, 250, 0.7);

  -webkit-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
     -moz-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
          box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
}

.TMyButtonBackground {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: -webkit-gradient(linear, 0% 0%, 0% 100%,color-stop(0, rgb(255, 255, 255)),color-stop(1, rgb(240, 240, 240)));
  background-image: -webkit-repeating-linear-gradient(top,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
  background-image: -ms-repeating-linear-gradient(top,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
  background-image: repeating-linear-gradient(to bottom,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
}

.TMyButtonBackground:active {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: -webkit-gradient(linear, 0% 0%, 0% 100%,color-stop(0, rgb(231, 231, 231)),color-stop(0.496, rgb(231, 231, 231)),color-stop(0.5, rgb(231, 231, 231)),color-stop(1, rgb(255, 255, 255)));
  background-image: -webkit-repeating-linear-gradient(top,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
  background-image: -ms-repeating-linear-gradient(top,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
  background-image: repeating-linear-gradient(to bottom,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
}

Now this might look like a huge mess, but most of it is gradient coloring. If you look closer you will notice that it’s the exact same gradient repeated with different browser prefixes. This is to ensure that things look exactly the same no matter what browser people use. Making gradients like this is easy; there are plenty of websites that deal with it. One of my favorites is ColorZilla, which will generate all this code for you.

If you don’t know your CSS you might be wondering: what is that :active postfix? We have two declarations with the same name, but one of them has :active appended to it. The active selector (which is the fancy name) simply tells the browser that whenever someone interacts with the element, it should switch to and display the :active variant instead. Typically a button will look raised when it’s not pressed, and sunken while you press it. This is automatic; you just define how an element should look when it’s pressed via the :active postfix. (Note: since different controls do different things, “active” can hold different meanings – but for most controls it means when you click it, touch it or otherwise interact with it.)

And now for the big question: what on earth is that first segment that looks like pascal code? Well, that is pascal code! All your CSS stylesheets are processed by a scripting engine, and only the result is actually handed to the linker. So yes indeed, you can write both functions and procedures and use them to make your CSS life easier (take that, Adobe!).

What we have done in the pascal snippet is to define a standard rounding value. That way we don’t have to update 300 places where border-radius is set (or you can blank it out if you don’t want rounded edges). We change the constant and it will spread to any style that uses it. Clever, huh?
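And since the preprocessor is pascal, you are not limited to constants. As a sketch only (assuming the scripting engine accepts a function call inside the `<?pas=...?>` tag, which may vary between versions – the function and style names are my own), a helper could generate whole property values:

```
<?pas
  // Hypothetical helper: builds a border declaration from a color
  function ThinBorder(Color: string): string;
  begin
    result := '1px solid ' + Color;
  end;
?>

.TMyToolPanel {
  border: <?pas=ThinBorder("#303030")?>;
}
```

The idea is the same as with the constants: change the helper in one place, and every style that uses it follows.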

OK, let’s use our styles for something fun! What we have here is a nice border definition, both active and non-active, and also a nice background. Let’s use cascades to change how a button looks!

What is a button anyway?

If you switch to Form1 in your application and place a TW3Button on the form, we can start to work with it. The first thing you need to do is clear out the StyleClass property so that Smart doesn’t apply the default styling; that way it’s easier to see what happens. Here is how it looks when I just compile and run it:


Now go into the procedure TForm1.InitializeForm() in the unit Form1 and write the following code:

procedure TForm1.InitializeForm;
begin
  inherited;

  // Remove the default styling
  w3button1.StyleClass := '';

  // Add our border
  w3button1.TagStyles.Add('TMyButtonBorder');

  // And add our background
  w3button1.TagStyles.Add('TMyButtonBackground');

  // Make the font autosize to the container
  w3button1.Font.AutoSize := true;
end;

Now save, compile and run and we get the following result:


Suddenly our vanilla Android button has all the majesty of Ubuntu Linux! And all we did was define a couple of styles and then add them manually. We could of course have stuffed all of this into a single, monolithic style – no shame in that – but I’m sure you agree that by separating border from background, and background from content, we have a lot of power on our hands!

As an experiment, remove the line that clears the StyleClass string and see what happens: when you click the button, the browser actually blends the two backgrounds together! Had we used RGBA values in our background gradients, the browser would have blended the standard theme button with our added styles. It’s pretty frickin’ awesome if you ask me.

Here is a more extensive example from our upcoming Ubuntu Linux theme. This is not yet ready for alpha, but it represents the first theme system where all our controls make use of multiple styles. It looks and behaves beautifully.


From the labs: A Ubuntu Linux inspired theme that is done using cascading exclusively

Brave new themes

So far I have written exclusively about things you can do right now. But we are working on Smart Mobile Studio every single day, and right now my primary task is to finish a working prototype of our theme engine. As you can see from the picture above, we still have a few controls that need to be adjusted. In the previous article I mentioned the importance of respecting borders, padding and margins from the stylesheet; well, let’s just say that I have learned that the hard way.

Most of our controls were written with no consideration for these things – we use an absolute boxing model after all, so we don’t have to. But not having to do something and taking the time to do it anyway is often the difference between quality and fluff. And this time we are doing things right every step of the way.

Much like the effect system (SmartCL.Effects.pas) the theming system makes use of partial classes. This means that it simply doesn’t exist until you include the unit SmartCL.Theme in your project.

With the theme unit included (actually it’s referenced by the RTL, so it’s there no matter what, but its additions won’t be visible unless you include it in your unit scope), TW3CustomControl suddenly gains a couple of properties and methods:

  • ThemeBorder property
  • ThemeBackground property
  • ThemeReset() method

When you create custom controls you can (if you need to) define a style for that control, but this time you don’t need to define borders or backgrounds. A style is now reduced to padding, margins, some font settings and perhaps shading if you need that. Then simply assign a ThemeBorder and a ThemeBackground in the StyleTagObject() method of your control – and your control will look and feel at home with everything else using that theme.

Let’s look at the standard borders first:

  • btNone
  • btFlatBorder
  • btControlBorder
  • btContainerBorder
  • btButtonBorder
  • btDialogButtonBorder
  • btDecorativeBorder
  • btEditBorder
  • btListBorder
  • btToolContainerBorder
  • btToolButtonBorder
  • btToolControlBorder
  • btToolControlFlatBorder

And then we have pre-defined backgrounds matching these:

  • bsNone
  • bsDisplay
  • bsControl
  • bsContainer
  • bsList
  • bsListItem
  • bsListItemSelected
  • bsEdit
  • bsButton
  • bsDialogButton
  • bsDecorative
  • bsDecorativeInvert
  • bsDecorativeDark
  • bsToolContainer
  • bsToolButton
  • bsToolControl

And as mentioned, you can assign these to any control you like.
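A minimal sketch of how that assignment could look. Keep in mind the theme API is pre-release, so treat the exact StyleTagObject() signature, the control name and the chosen enum values as illustrative:

```pascal
procedure TMyToolButton.StyleTagObject;
begin
  inherited;
  // Border and background now come from the active theme,
  // not from our own CSS style
  ThemeBorder := btToolButtonBorder;
  ThemeBackground := bsToolButton;
end;
```

Swap the theme, and the control re-dresses itself; no CSS changes needed on your side.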

Same defines, many themes

The cool thing about the new system is that it’s not just one theme. We start with one of course, but ultimately all our themes will follow the new styling scheme. The goal is to use pure constants, much like what Delphi did with colors (clBtnFace and so on), so that we only need to change the coloring constants – and the changes will spread to the whole theme.

You as a Smart Mobile Studio developer don’t need to care about the details. As long as you stick to the standard types listed above, your custom controls will always match whatever theme is being used – and always look good doing it.


Still a few controls to style, but I’m sure you agree that it’s starting to look nice

Well, that has been a rather long introduction to Smart and CSS. I hope you have enjoyed reading it. I will keep you all posted on the progress we make, which is moving ahead very fast these days!

Personally I can’t wait until Smart Mobile Studio 3.0 is ready, and I hope people value the effort we have put into this. And we are just getting started!

Smart Mobile Studio and CSS: part 2

October 11, 2017 Leave a comment

In my previous article we had a quick look at some fundamental concepts regarding CSS. These concepts are not unique to Smart Mobile Studio, but simply just the way things work with CSS in general. The exception being the way Smart maps your pascal class-name to a CSS style of the same name.

To sum up what we covered last time:

  • Smart maps a control’s class-name to a CSS style with the same name. So if your control is called TMyControl, it expects to find a CSS style cleverly named “.TMyControl”. This works very well and is easy to apply.
  • CSS can affect elements recursively, so you can write CSS that changes the appearance and behavior of child controls. This technique is typically used if you inject HTML directly via the InnerHTML property.
  • CSS is cascading, meaning that you can add multiple styles to the same control. The browser will merge them into a final, computed style. The rule of thumb is to avoid styles that affect the same properties.
  • CSS can define more than colors; things like animations, gradients, animated gradients and whatnot can all be defined in CSS.
  • Smart Mobile Studio ships with units for creating, applying and working with CSS from your pascal code. It also ships with effect classes that can trigger defined CSS animations.
  • Smart Mobile Studio has a special effect unit (SmartCL.Effects) that when added to the uses list, adds quite a few effect procedures to TW3MovableControl. These effect methods are prefixed with Fx (ex: fxMoveTo, fxFadeOut, fxScaleTo).

Best practices

When you write your own controls, don’t cheat. I have seen a lot of code where people create instances of TW3CustomControl directly, and then jump through hoops trying to make that look good. TW3CustomControl is a base class; it’s designed to be inherited from – not used “as is”. I can understand the confusion to some extent: since TW3CustomControl manages a DIV by default, people with some HTML background probably think creating one of these is the same as making a DIV. But by doing so they essentially short-circuit the whole theme system, since (as underlined above) every pascal class uses a style with the same name – and TW3CustomControl is just a transparent block of nothing.

No matter how small a thing you are creating, always inherit out your own classes and give them distinct names. This is extremely important with regards to styling, but it is also a discipline of writing readable, maintainable code. Using TW3CustomControl all over the place will make the code a mess to maintain – let alone share with others who don’t have a clue what you are doing.
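Even when a control adds nothing new, a named descendant costs one line and buys you a styling hook (the class name here is my own, purely for illustration):

```pascal
type
  // An "empty" descendant: behaves exactly like TW3CustomControl,
  // but picks up its own ".TMyPlainBox" style from the stylesheet
  TMyPlainBox = class(TW3CustomControl);
```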

A practical example

To show how easy it is to style things once you have written code that uses distinct class names and clear-cut structure, let’s take the time to write a little list-box. Nothing fancy, just a control that can take X number of child rows, style them and display the items vertically like a list. Let’s begin with the class code:


type

  // Define an exception especially for our control
  EMyControl = class(EW3Exception);

  // Define a baseclass, that way we can grow in the future
  TMyChild = class(TW3CustomControl)
  end;

  // Define a class type, good when working with lists or
  // collections of elements that share ancestors
  TMyChildClass = class of TMyChild;

  // Define a clear child class, that way we can apply
  // styling without problems
  TMyChildRed = class(TMyChild)
  end;

  // Create a custom version with sensitive properties only
  // available to ancestors. Here we place these in the
  // protected section (Items and Count)
  TCustomMyControl = class(TW3CustomControl)
  protected
    property Items[const Index: integer]: TMyChild
            read ( TMyChild(GetChildObject(Index)) ); default;
    property Count: integer read ( GetChildCount );
  public
    procedure Resize; override;
    function Add(const NewItem: TMyChild): TMyChild; overload;
    function Add(const &Type: TMyChildClass): TMyChild; overload;
  end;

  // The actual control we use, this is the one we write
  // CSS code for and that we create and use in our applications.
  // This step is optional of course, but it has its perks
  TMyControl = class(TCustomMyControl)
  public
    property Items;
    property Count;
  end;

procedure TCustomMyControl.Resize;
var
  LCount: integer;
  bl, bt, br, bb: integer;
  wd, dy: integer;
  LItem: TMyChild;
begin
  inherited;

  // Avoid doing work if there is nothing there
  LCount := GetChildCount();
  if LCount > 0 then
  begin
    // Get the values of the borders/padding etc from CSS.
    // We need to respect these when working in the client-rect
    bl := Border.Left.Width + Border.Left.Padding + Border.Left.Margin;
    bt := Border.Top.Width + Border.Top.Padding + Border.Top.Margin;
    br := Border.Right.Width + Border.Right.Padding + Border.Right.Margin;
    bb := Border.Bottom.Width + Border.Bottom.Padding + Border.Bottom.Margin;

    // This is the maximum width an element can have without
    // bleeding over whatever styling is present
    wd := ClientWidth - (bl + br);

    // Start at the top
    dy := bt;

    // Now layout each element vertically
    for var x := 0 to LCount-1 do
    begin
      LItem := Items[x];
      LItem.SetBounds(bl, dy, wd, LItem.Height);
      inc(dy, LItem.Height);
    end;
  end;
end;

function TCustomMyControl.Add(const &Type: TMyChildClass): TMyChild;
begin
  if &Type <> nil then
  begin
    // Start update
    BeginUpdate();

    // Create our control & return it
    result := &Type.Create(self);

    // Define that a resize must be issued
    AddToComponentState([csSized]);

    // End update. If BeginUpdate was not called elsewhere
    // the resize will happen now. If it was, it will happen
    // when the last EndUpdate() is called (clever stuff!)
    EndUpdate();
  end else
    raise EMyControl.Create('Failed to add item, classtype was nil error');
end;

function TCustomMyControl.Add(const NewItem: TMyChild): TMyChild;
begin
  result := NewItem;
  if NewItem <> nil then
  begin
    // Are we the current parent?
    if not Handle.Contains(NewItem.Handle) then
    begin
      // Remove from other parent, then add the child to ourselves

      // Start update
      BeginUpdate();

      // Define that a resize must be issued
      AddToComponentState([csSized]);

      // End update. If BeginUpdate was not called elsewhere
      // the resize will happen now. If it was, it will happen
      // when the last EndUpdate() is called (clever stuff!)
      EndUpdate();
    end;
  end else
    raise EMyControl.Create('Failed to add item, instance was nil error');
end;

If you are wondering about the strange property getters, where we don’t call a function but instead have an expression inside parentheses, that is another perk of Smart Pascal. The GetChildObject() method is part of TW3TagContainer, which TW3CustomControl ultimately inherits from, so we simply typecast and call that. This is perfectly legal in Smart as long as it’s a simple function call or expression with a matching type.
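As a sketch, the same shorthand works for any simple expression with a matching type (the property below is my own illustration, not part of the control above):

```pascal
// An expression-style getter: no separate GetIsEmpty function needed
property IsEmpty: boolean read ( GetChildCount() = 0 );
```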

And now let’s look at the CSS for our new control and its red child:

.TMyChildRed {
  padding: 2px;
  background-color: #FF0000;
  font-family: "Ubuntu", "Helvetica Neue", Helvetica, Verdana;
  color: #FFFFFF;
  border-bottom: 1px solid #AA0000;
}

.TMyControl {
  padding: 4px;
  background-color: #FFFFFF;
  border: 1px solid #000000;
  border-radius: 4px;
  margin: 1px;
}

We need to populate the list before we can see anything of course, so if we add the following code to InitializeForm() things will start to happen:

  // Let's create our control. We use an inline variable
  // here since this is just an example and I won't be
  // accessing it later. Otherwise you want to define it
  // as a form field in the form-class
  var LTemp := TMyControl.Create(self);
  LTemp.SetBounds(100, 100, 300, 245);

  // We call BeginUpdate here to prevent the control
  // calling Resize() for every element. It will only
  // resize when the last EndUpdate (below) is called.
  // Also see how we use this inside the procedures
  // that need to force a change
  LTemp.BeginUpdate();
  for var x := 1 to 10 do
  begin
    // Create a new "red" child
    var NewItem := LTemp.Add(TMyChildRed);

    // Fill the content with something
    NewItem.InnerHTML := 'Item number ' + x.ToString();
  end;
  LTemp.EndUpdate();

The end result might not look fancy, but it demonstrates some basic concepts that are fundamental to working with Smart Mobile Studio: namely how to define CSS that maps to your classes, and how to use BeginUpdate() and EndUpdate() to prevent a ton of calls to Resize() when adding multiple items.


It won’t win any prizes for looks, but it demonstrates some very important principles when writing controls


Being able to style and layout child elements in your own controls is cool, but applications can quickly become dull and static without visual feedback. This is why I wrote the effect unit, namely to make it so easy to use GPU powered effects in your applications that anyone can make stuff move around.

So let’s make a little change to our mini list control. When a user presses one of the items, we want the item to scale up while the mouse is pressed, and then gracefully shrink back to normal size when the mouse is released. We could make it spin around for that matter, but let’s start with something a bit more down to earth.

This is where defining our own classes comes into play. We are going to add some code to our root child class, TMyChild, because this behavior should be universal. For the sake of simplicity I’m just going to use the control’s own events for this purpose. So let’s expand our ancestor class to the following:

  TMyChild = class(TW3CustomControl)
  private
    FDown: boolean;
    procedure HandleMouseDown(Sender: TObject; Button: TMouseButton;
                        Shift: TShiftState; X, Y: integer);
    procedure HandleMouseUp(Sender: TObject; Button: TMouseButton;
                        Shift: TShiftState; X, Y: integer);
  protected
    procedure InitializeObject; override;
  end;
The implementation needs to keep track of when a scale is in progress, otherwise we could scale the element out of sync with the UI. Again, this is just an example – there are many ways to keep track of things – but let’s keep it simple:

procedure TMyChild.InitializeObject;
begin
  inherited;
  self.OnMouseDown := HandleMouseDown;
  self.OnMouseUp := HandleMouseUp;
end;

procedure TMyChild.HandleMouseDown(Sender: TObject; Button: TMouseButton;
                    Shift: TShiftState; X, Y: integer);
begin
  if Button = TMouseButton.mbLeft then
    if not FDown then
    begin
      FDown := true;
      fxScaleUp(1.0, 1.5, 0.3);
    end;
end;

procedure TMyChild.HandleMouseUp(Sender: TObject; Button: TMouseButton;
                    Shift: TShiftState; X, Y: integer);
begin
  if Button = TMouseButton.mbLeft then
    if FDown then
      fxScaleDown(1.5, 1.0, 0.3, procedure ()
        begin
          FDown := false;
        end);
end;

The result? Well, when we press one of the items in our list, that item grows to 1.5 times its original size (the parameter names for the effects are easy to understand). So we scale from 1.0 (normal size) to 1.5, and we tell the system to execute this transition in 0.3 seconds.

All the effect methods take an optional callback (an anonymous procedure) that fires when the effect is finished. As you can see in the HandleMouseUp() method, we use this to reset the FDown field, allowing the effect to be executed again on the next click.


Smooth scaling via hardware

Next time

Hopefully the past two articles have been interesting. In our next article we will look at some of the stuff we are building in our labs. That means talking about styling and how we are working to improve it (read: not yet available but in the process).

In the meantime, have a peek at what you can do with proper use of CSS effects:


You can do some amazing things with effects and JS (click image)

Happy coding!

Smart Mobile Studio and CSS: part 1

October 9, 2017 Leave a comment

If I were to pinpoint a single feature of the modern HTML5 rendering engine that demands both respect and care, it would have to be CSS. While it’s true that no other piece of technology has seen the level of development as “the browser” for the past 20 years – the piece that has seen the most is without a doubt CSS.

When we designed Smart Mobile Studio, styling became an issue almost from the start. I knew CSS well, and I was reluctant to create a theming engine for Smart because it’s so easy to fall into the same pit that Macromedia once did; namely that you end up boxing the user into a corner with the best of intentions. So instead of writing a large and complex styling engine, we designed the simplest possible system we could imagine – and left the rest to our users.

For advanced users that know their way around CSS, HTML and Javascript as well as they know object pascal, this has been a great bonus. But for users that come directly from Delphi or Lazarus with little or no background in web technology – CSS has been a black box they would rather not touch. Which is really sad because well written CSS makes up as much as 40% of a good application. If not more (!).

CSS for smarties

Most Delphi developers in their 40s who never really got into web development (because they were too busy coding in Delphi) probably think of CSS as a coloring language. I keep hearing the same thing over and over: “CSS? You can set colors, background pictures and stuff”. In part they are right – back in the late 90s, that is. Yes, CSS allows you to define how things should be colored and stuff like that – but CSS has evolved side by side with modern JavaScript and HTML, and as such it’s capable of a lot more than just setting colors.

The most important features you want to know about are:

  • You can define gradients as backgrounds, not just a static color or picture
  • You can use alpha blending (rgba) rather than fixed colors (#rrggbb)
  • You can define elaborate animations
  • Animations can use most CSS properties: colors, size, opacity and / or position
  • CSS is recursive; you can define rules that apply to child elements of a control using a style. You can also target child elements by name.
  • CSS is no longer just 2D but also 3d (Note: Sprite3d has been ported to Smart, see SmartCL.Sprite3d.pas), so you can place elements in 3d space
  • Rotation is now standard, be it purely 2d or 3d
  • You can define transitions directly on a property, like how long a move should take
  • CSS is cascading (hence the term “cascading style sheets”)
  • CSS allows elements to inherit properties from their parents, which is extremely handy if you want all child elements to use the font you set in the actual control you are making.
  • Filters! You can now apply great graphics filters on your content
  • CSS is powered by the GPU (graphical processing unit) and makes full use of the graphics chipset on the target device

This is just the tip of the iceberg of what modern CSS has to offer, but before you dive in, let’s look at some fundamental facts you need to know when working in Smart Mobile Studio.
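A few of the features above combined in one small snippet (the class name and values are my own, purely for illustration):

```css
.MyCard {
  /* Alpha blending instead of a fixed color */
  background-color: rgba(0, 0, 0, 0.35);
  /* A gradient defined as a background */
  background-image: linear-gradient(to bottom, rgba(255, 255, 255, 0.2), rgba(0, 0, 0, 0.2));
  /* A transition defined directly on a property */
  transition: transform 0.3s ease-out;
}

.MyCard:active {
  /* GPU-powered 2d rotation and scaling while pressed */
  transform: rotate(3deg) scale(1.05);
}
```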

Class to style mapping

Have you ever wondered how a custom control in Smart knows what css style to use? For instance, if you drop a TW3Panel on a form – where does the style come from? Is there some magical spell that automatically assigns a piece of css to the visual control? Sure you know there is a CSS file that’s generated for the application, and you can pick between a few themes, but how is the panel CSS style attached to an instance of TW3Panel?

Like I mentioned above, we tried to leave CSS alone for fear of boxing the user into a system that was too limited or too loose. But we did one stroke of genius, and that was to automatically map the pascal class name to the CSS class name. And this turned out to be a very efficient method of dealing with styling.

So to make this crystal clear: let’s say you create a new control called TMyControl. When you create an instance of that control in your pascal code, it will automatically try to use a CSS style with the same name. So far that is the only rule we have enforced. But it is extremely important to know this and understand how powerful it is.

Recursive CSS

The next thing I want to explain is how you can define recursive styles. Again, let’s say you have created a new custom control called TMyControl. You go into your project options, click on “Linker” in the treeview on the left – and then check the “Use custom theme” checkbox. This makes a copy of whatever theme you picked for your application and stores that copy within your project file. When you click “OK” to exit the project options dialog and click “Save”, your project will get a new item cleverly named “Custom CSS”. This is where you add your own styles.


So ok, we have a control called TMyControl and now we want to style it. So we double-click on the “Custom CSS” node in our project, and we are greeted with a ton of weird looking CSS code.

So let’s go ahead and create a style with the same name as our pascal class, that way they will find each other:

.TMyControl {
  background-color: #FF0000;
}
Click “Save” again (or “CTRL + S” on your keyboard) and compile + run your program. If you had created an instance of TMyControl on your form, you should now see a red box. Not much to look at just yet, but we will deal with that later.

But a blank control is really not much fun. So for the sake of argument, let’s say you want to display a header inside your control. You create a second class called TMyHeader and then create an instance of it in the constructor of TMyControl, placed at the top of the TMyControl display, 32 pixels high. So we end up with something like this:

unit Unit1;

interface

uses
  System.Types, System.Colors, System.Types.Convert,
  SmartCL.System, SmartCL.Graphics, SmartCL.Components, SmartCL.Forms,
  SmartCL.Fonts, SmartCL.Borders;

type

  // our header
  TMyHeader = class(TW3CustomControl)
  end;

  // our new cool control
  TMyControl = class(TW3CustomControl)
  private
    FHeader: TMyHeader;
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure Resize; override;
  public
    property Header: TMyHeader read FHeader;
  end;

implementation

procedure TMyControl.InitializeObject;
begin
  inherited;
  FHeader := TMyHeader.Create(self);
end;

procedure TMyControl.FinalizeObject;
begin
  FHeader.free;
  inherited;
end;

procedure TMyControl.Resize;
begin
  inherited;
  FHeader.SetBounds(0, 0, ClientWidth, 32);
end;

end.
At this point we could of course do the same as we just did, namely add a CSS style called “.TMyHeader” and define our header there – which is also how you should do things. But there will be cases where you don’t have this fine control over things; perhaps you are using a library, or maybe you are generating HTML and just injecting it via the InnerHTML property? Who knows – the point is that we can write CSS that targets ANY child element without knowing much about it. And we do that using something called a CSS selector.

So let’s say I want to color all children of TMyControl green, regardless of type (just for the hell of it). Well, then I can do this in our CSS:

.TMyControl {
  background-color: #FF0000;
}

/* Color all (*) children green! */
.TMyControl > * {
  background-color: #00FF00;
}
We can also be more specific and say: color the first P (paragraph) inside the first DIV child green! (I should mention that the default tag TW3CustomControl manages is a DIV.) Well, to target the text paragraph inside the first child we would write:

.TMyControl {
  background-color: #FF0000;
}

/* Color the P inside the first DIV green! */
.TMyControl > :first-child > P {
  background-color: #00FF00;
}

Now you are probably wondering: where did that “P” come from? There is no paragraph in my code? Well, as mentioned, we can add one via the InnerHTML property if we like:

procedure TMyControl.InitializeObject;
begin
  inherited;
  FHeader := TMyHeader.Create(self);
  FHeader.InnerHTML := '<p>This is the text!</p>';
end;

Note: WordPress has a tendency to kill HTML tags, so if you don’t see a paragraph tag in the string above, WordPress gobbled it up.

Now, the point of this code so far has not been to teach how to write good code. In fact, you really should avoid code like this unless you know exactly what you are doing. The point here was to show you how CSS can be made to operate on structures. If a style is selected by a control, selector code like I demonstrated above kicks in automatically, and you can do some pretty amazing things with it. Just changing the background doesn’t really give this system the credit it deserves. You can add animations, change the row color of every odd list item, add a glowing rectangle around one particular element – the sky is the limit!
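To make the odd-row idea concrete, a selector like this (colors illustrative) stripes every other child DIV without any pascal code:

```css
/* Give every odd child DIV of the control a subtle stripe */
.TMyControl > div:nth-child(odd) {
  background-color: rgba(0, 0, 0, 0.05);
}
```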

The cascading part

This is probably one of the simplest features ever, yet it’s one that people fail to remember when they sit down to write CSS code. So please make a note of this one because it will save you so much time.

So far we have looked at single styles attached to a control. But truth be told, you can assign 100 styles to the same control – at the same time (!). The rendering engine will merge them all together and draw the computed result onto the display. The only rule is: they must not collide. If you define two backgrounds, the style engine will try to merge them, but odds are only one of them will survive.

But let’s stop for a minute and think about what this means:

  • Instead of one large, monolithic style for a control, you can divide it into smaller and more manageable parts
  • You can define borders in one style, background in another and fonts in a third
  • You can have two separate animations running at the same time targeting the same element – and as long as they don’t manipulate the same properties, it will work just fine.

It can take a while for the true potential of this to really sink in.

To give you a practical example: this is how Smart Mobile Studio deals with disabled states. Whenever you disable a control, a style called “DisabledState” is added to it. This takes over opacity, disables mouse and touch events, changes the mouse cursor, and draws a diagonal pattern that covers the control.

When the control is enabled again, we simply remove the style and it reverts back to normal. It’s pretty cool if I say so myself!
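The RTL’s actual DisabledState style varies by theme, but a sketch of what such a state-style can contain could look like this (class name and values illustrative, not copied from the RTL):

```css
.MyDisabledState {
  opacity: 0.5;
  pointer-events: none;  /* disables mouse and touch events */
  cursor: default;       /* changes the mouse cursor */
  /* a diagonal pattern covering the control */
  background-image: repeating-linear-gradient(
    45deg,
    rgba(0, 0, 0, 0.1) 0px,
    rgba(0, 0, 0, 0.1) 4px,
    transparent 4px,
    transparent 8px
  );
}
```

Because it is just a style class, adding it disables the control’s look and feel, and removing it restores everything in one operation.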

TW3CustomControl, which is the foundation for all visible controls on the palette, has a property called “CSSClasses”. This has been deprecated and replaced by “TagStyles”, but both still work. This class gives you easy methods for adding, removing and checking whether any extra styles (apart from the default style) have been added.

It looks like this:

TW3TagStyle = class(TW3OwnedObject)
    FCache:     TStrArray;
    FCheck:     integer;
    FHandle:    TControlHandle;
    function    GetCount: integer; virtual;
    function    GetItem(const Index: integer): string; virtual;
    procedure   SetItem(const Index: integer; const Value: string); virtual;
    procedure   ParseToCache(CssStyleText: String); virtual;
    procedure   CacheToTag; virtual;
    procedure   TagToCache; virtual;
    function    AcceptOwner(const CandidateObject: TObject): Boolean; override;
    property    Handle: TControlHandle read FHandle;
    property    Count: integer read GetCount;
    property    Items[const Index: integer]: string read GetItem write SetItem;

    procedure   Update; virtual;

    class procedure AddClassToControl(const Handle: TControlHandle; CssClassName: string);
    class function ControlContainsClass(const Handle: TControlHandle; CssClassName: string): boolean;
    class procedure RemoveClassFromControl(const Handle: TControlHandle; CssClassName: string);

    function    Contains(const CssClassName: string): boolean;
    function    Add(CssClassName: string): integer;
    function    Remove(const Index: integer): string;
    function    RemoveByName(CssClassName: string): string;
    function    IndexOf(CssClassName: string): integer;
    function    ToString: string;
    procedure   Clear;
    constructor Create(AOwner: TObject); override;
    destructor Destroy; override;
  end;

So let's say you have a fancy animated background you want to show while doing something; then simply call the AddClassToControl() method.

I should mention that I have used the word "style" so far to avoid confusion. A CSS definition is not really called a style in HTML land, but a style class. I just used "style" to make the distinction easier for everyone.

Summing up

In this short article we have had a look at the fundamental rules of CSS. We have looked at how a control matches and finds its CSS style, and how to define your own styles. We also touched on the concept of CSS selectors, which can recursively affect child elements in your controls. And last but not least, we have talked about cascading and how you can assign multiple styles to the same element.

In our next article we are going to look at some of the next-generation features in our RTL regarding styles, and also talk a bit about what we have cooking in our labs. Needless to say, CSS is going to become easier and much more powerful in the weeks to come, so it’s important that you pick up on the basics now!

Homework (if you need it) is to have a look at the CSS pascal classes in our RTL. They contain a lot of nice features, helper classes and more to generate platform independent CSS code that you can use right now.

You want to go through the following units:

  • SmartCL.CSS.StyleSheet
  • SmartCL.CSS.Classes
  • SmartCL.Effects

Have a peek at the method "TSuperStyle.AnimGlow" and see how CSS can be written as code, although in most cases it's easier to just write it as vanilla CSS. You will also be happy to know that stylesheets can be created as normal Pascal objects, so you don't have to put all your eggs in one basket.

The last unit in that list, SmartCL.Effects is special. It uses something called “partial classes” which is not supported by Delphi or Lazarus. In general it means that you can spread the declaration of a class over many units.

When you add SmartCL.Effects to your form’s uses clause, TW3CustomControl suddenly gains a ton of effect methods (prefixed by “fx”). These are CSS animation effects that you can call on any control. You can also daisy-chain them together and they will execute in sequence. Again this demonstrates what you can achieve with CSS and some clever programming.

Until next time!

Webfonts in Smart Mobile Studio

October 4, 2017 2 comments

Webfonts are something I have wanted to include in Smart for ages now. It's such a simple feature, but when you use it right it becomes powerful and reassuring.

What is a webfont?

Well, you know how you have to define what fonts you use in HTML, right? And if the user doesn't have that font, you have fallback fonts it can use instead? If you have worked with web technology for a while, you no doubt know how haphazard the results can be. You would think that a font like "Verdana" looks exactly the same from system to system, but that is not always the case.


Adding webfonts to your project is very easy

Apple, for instance, has its own tweak on just about every typeface; Linux often has alternatives that look good but might not be 100% identical (on some distros; Linux is not exactly "one thing"). And Microsoft tends to live in its own universe.

The solution? Webfonts. In short it means that the browser will double-check if the user has the font you need installed. And if they don’t – the font is downloaded from a font provider (like Google) when your web application starts.
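In plain CSS terms the mechanism looks roughly like this (the font choice here is just an example; the URL follows Google's public font service pattern):

```css
/* Pull the "Roboto" typeface from Google's font service */
@import url('https://fonts.googleapis.com/css?family=Roboto');

body {
  /* If the webfont cannot be loaded, the browser falls back
     to the next locally installed font in the list */
  font-family: 'Roboto', 'Helvetica Neue', Arial, sans-serif;
}
```

The browser downloads the font definition once, caches it, and every element styled with that family renders identically on every platform.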

Fonts, glorious fonts!

The result is that your application will look and feel the same no matter what device is used. And that is a very important thing, because coding flexible, adaptive UIs that should work on Android, iOS, TVs and ordinary browsers is no picnic to begin with. Having to worry that your fancy Ubuntu-based UI is rendered in vanilla sans-serif (read: looking like something out of the 80s) has been an ever-present reality for two decades now.


Plenty of good looking fonts on Google

If you head over to Google Fonts and take a gander at the fonts available, I'm sure you will agree that this is a fabulous idea. And as always, when you combine good-looking fonts with some cool CSS, the results can be spectacular.

Still in Alpha

We are still in Alpha for Smart Mobile Studio 3.0, so there might be hiccups along the way. But all in all you should be able to enjoy webfonts in our next update.


Why buy a Vampire accelerator?

August 24, 2017 2 comments

With the Amiga about to re-enter the consumer market, a lot of us "old timers" are busy knocking the dust off our old machines. And I love my old machines, even though they are technically useless by modern standards. But these machines have a lot of inspiration in them, especially if you write code. And yes, there is a fair bit of nostalgia involved; there is no point in lying about that.

I mean, your mobile phone is probably 100 times faster than a vintage Amiga. But as you will discover with the new machines that are about to hit the market, there is more to this computer than you think. But vintage Amigas? Sadly they lack the power to do anything useful [in the "modern" sense].

Enter the vampire

The Vampire is a product that started shipping about a year ago. It's an FPGA-based accelerator, and it's quite frankly turning the retro scene on its head! Technically it's a board that you just latch onto the CPU socket of your classic Amiga; it then takes over the whole machine and replaces the CPU and chipset with its own versions of these. Versions that are naturally a hell of a lot faster!

The result is that the good old Amiga is suddenly beefy enough to play Doom, Quake, MP3 files and MPG video (click here to read the datasheet). In short: this little board gives your old Amiga machine a jolt of new life.

Emulation vs. FPGA

I'm not going to get into the argument about FPGA not being "real", because that's not what FPGA is about. Nor am I negative towards classic hardware, because I own a ton of old Amiga gear myself. But I will get in your face when it comes to buying a Vampire.

Before we continue I just want to mention that there are two models of the Vampire. There is the add-on board I have just mentioned, which again comes in different models for various Amiga versions (A600 and A500 so far). The second model is a completely stand-alone Vampire motherboard that won't even need a classic Amiga to work. It will be, for all intents and purposes, a stand-alone SBC (single-board computer) where you just hook up power, video, storage and a mouse, and off you go!

This latter version, the stand-alone, is a project I firmly believe in. The old boards have been out of production since 1993 and are getting harder to come by. And just like people they will eventually break down and stop working. There is also price to consider because getting your 20-year-old A500 fixed is not easy. First of all you need a specialist that knows how to fix these old things, and he will also need parts to work with. Since parts are no longer in production and homebrew can only go so far, well – a brand new motherboard that is compatible in every way sounds like a good idea.

There is also the fact that FPGA can reach absurd speeds. It has been mentioned that if the Vampire used a more expensive FPGA module, 68k-based Amigas could compete with modern processors. Can you imagine a 68k Amiga running side by side with the latest Intel processors? Sounds like a lot of fun if you ask me!


Amiga 1000, in my view the best looking Amiga ever produced

But then there is emulation. Proper emulation, which for Amiga users can only mean one thing: UAE in all its magnificent diversity and incarnations.

Nothing beats firing up a real Amiga, but you know what? That experience has been greatly exaggerated. I recently bought a sexy A1000, which is the first model that was ever made. This is the original Amiga, made way back before Commodore started to mess around with it. It cost me a small fortune to get, but hey, it was my first ever Amiga, so I wanted to own one again.

But does it feel better than my Raspberry PI 3b powered A500? Nope. In fact I have only fired up the A1000 twice since I bought it, because having to wait for disks to load is just tedious (not to mention that you can't get new, working floppy disks anymore). Seriously. I love the machine to bits, but it's just damn tedious to work on in 2017. It belongs to the 80s, and no one can ever take away its glory or its role in computer history. That achievement stands forever.

High Quality Emulation

If you have followed my blog and Amiga escapades, you know that my PI 3b based Amiga, overclocked to the hilt, yields roughly 3.2 times the speed of an Amiga 4000/040. This was at one point the flagship Commodore computer. The Amiga 4000s were used in movie production, music production, 3D rendering and heavy-duty computing all over the world. And the 35€ Raspberry PI gives you 3.2 times that power via the UAE4Arm emulator. I don't care what the Vampire does, the PI will give it the beating of its life.

Compiling anything, even older stuff that is a joke by today's standards, is painful on the Raspberry PI. Here showing my retrofitted A500 PI with sexy LED keyboard. It will soon get a makeover with an UP board :)

My retrofitted Raspberry PI 3b Amiga. Serious emulation at high speed allowing for software development and even the latest Freepascal 3.x compiler

Then suddenly, out of the blue, Asus comes along with the Tinkerboard. A board that I hated when it first came out (read part 1 here, part 2 here) due to its shabby drivers. The boards had been collecting dust on my office shelf for six months or so, and it was blind luck that I downloaded and tested a new disk image. If you missed that part you can read the full article here.

And I'm glad I did, because man, the Tinkerboard makes the Raspberry PI 3b look like a toy! Asus has also adjusted the price lately. It was initially priced at 75€, but in Norway right now it retails for about 620 NOK, or roughly 62€. So yes, it's about twice the price of the PI, but it also gives you twice the memory, twice the graphics performance, twice the IO performance and a CPU that is a pleasure to work with.

The Raspberry PI 3b can’t be overclocked to the extent the model 1 and 2 could. You can over-volt it and tweak the GPU and memory and make it run faster. But people don’t call that “overclock” in the true sense of the word, because that means the CPU is set to run at speeds beyond the manufacturing specifications. So with the PI 3b there is relatively little you can do to make it run faster. You can speed it up a little bit, but that’s it. The Tinkerboard can be overclocked to the hilt.


The A1222 motherboard is just around the corner [conceptual art]

Out of the box it runs at 1.5 GHz, but if you add a heatsink, a fan (important) and a 3A PSU, you can overclock it to 2.6 GHz. And like the PI you can also tweak the memory and GPU. So the Tinkerboard will happily run 3 times faster than the PI. If you add a USB3 hard disk you will also beef up IO speeds by 100 megabytes per second, which makes a huge difference. Linux does memory paging, and paging slows everything down if you just use the SD card.

In short: if you fork out 70€ you get an SBC that runs rings around both the Vampire and the Raspberry PI 3b. If we allow for some Linux services and drivers that have to run in the background: 3.2 x 3 = 9.6; let's round that down to 9, since the background services cost some performance. But still: 70€ for an Amiga that runs 9 times faster than an A4000 with an MC68040 CPU? That should blow your mind!

I'm sorry, but there has to be something wrong with you if that doesn't get your juices flowing. I rarely game on my classic Amiga setup. I'm a coder, but with this kind of firepower you can run some of the biggest and best Amiga titles ever made, and the Tinkerboard won't even break a sweat!

You can’t afford to be a fundamentalist

There are some real nutbags in the Amiga community. I think we all agree that having the real deal is a great experience, but the prices we see these days are borderline insane. I had to fork out around 500€ to get my A1000 shipped from Belgium to Norway. Had tax been added to the original price, I would have been looking at something in the 700€ range. Still, 500€ for a 20-year-old computer that can hardly run Workbench 1.2? Unless you add the word "collector" here, you are in fact barking mad!

If you are looking to get an Amiga for old times' sake, or perhaps you have an A500 and wonder if you should fork out for the Vampire: will it be worth the 300€ price tag? Unless you use your Amiga on a daily basis I can't imagine what you need a Vampire for. The stand-alone motherboard I can understand, that is a great idea. But the accelerator? 300€?

I mean you can pay 70€ and get the fastest Amiga that ever existed. Not a bit faster, not something on second place – no – THE FASTEST Amiga that has ever existed. If you think playing MP3 and MPG media files is cool with the vampire, then you are in for a treat here because the same software will work. You can safely download the latest patches and updates to various media players on the classic Amiga, and they will run just fine on UAE4Arm. But this time they will run a hell of a lot faster than the Vampire.


My old broken A500 turned into an ass-kicking, battle hardened ARM monster

You really can't be a fundamentalist in 2017 when it comes to vintage computers. And why would you want to be? With so much cool stuff happening in the scene, why would you want to limit your Amiga experience to a single model? Aros is doing awesome stuff these days, you have the x5000 out and the A1222 just around the corner. MorphOS is stable and good on the G5 PPC. There has never been a time when there were so many options for Amiga enthusiasts! Not even during the golden days between 1989 and 1994 were there so many exciting developments.

I love the classic Amiga machines. I think the Vampire stand-alone model is fantastic, and if they ramp up the FPGA to a faster model, they will in fact have re-created a viable computer platform. A 68080 FPGA-based CPU that can go head to head with x86? That is quite an achievement, and I support it wholeheartedly.

But having to fork out this amount of cash just to enjoy a modern Amiga experience is a bit silly. You can actually, right now, go out and buy a $35 Raspberry PI and enjoy far better results than the Vampire is able to deliver. How can that be negative? I have no idea, nor will I ever understand that kind of thinking. How do any of these people expect the Amiga community to grow and attract new, young members if the average price of a 20-year-old machine is 500€? Which incidentally is 50€ more than a brand new A1222 PPC machine capable of running OS 4.

And with the Tinkerboard you can get 9 times the speed of an A4000? How can that not give you goosebumps!

People talk about Java and virtual machines like it's black magic. Well, UAE gives you a virtual CPU and chipset that makes mincemeat of both Java and C#. It also comes with one of the largest software libraries in the world. I find it inconceivable that no one sees the potential in that technology beyond game playing, but when you become violent or nasty over hardware, then I guess that explains quite a bit.

I say, use whatever you can to enjoy your Amiga. And if your perfect Amiga is a PI or a Tinkerboard (or ODroid) – who cares!

I for one will not put more money into legacy hardware. I'm happy that I have the A1000, but that's where it stops for me. I am looking forward to the latest Amiga x5000 PPC and can't wait to get coding on that, but unless the Apollo crew upgrades to a faster FPGA I see little reason to buy anything. I would gladly pay 500-1000€ for something that can kick modern computers in the behind, and I imagine a lot of 68k users would be willing to do that as well. But right now PPC is a much better option, since it gives you both 68k and the new OS 4 platform for one price. And for affordable Amiga computing, emulation is now of such quality that you won't really notice the difference.

And I love coding 68k assembler on my Amibian emulator setup. There is nothing quite like it 🙂

The Tinkerboard Strikes Back

August 20, 2017 Leave a comment

For those that follow my blog, you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it; in fact I bought two of the boards, because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never reached the user. It was released prematurely, and I think most people that bought it agree on this.


The initial release was less than bad, it was horrible

Since my initial review those months ago, good things have happened. Asus seems to have listened to the "poonami" of negative feedback and adapted their website accordingly. Unlike my first visit, when you literally had to dig through recursive menus (which were less than intuitive in this case) just to download the software, the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get; downloading it is reduced to a one-liner for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever-popular Python and Scratch.

I am a bit disappointed that they don't provide Free Pascal units. A lot of developers use Object Pascal on these boards, after all, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that, after all) yet avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good Pascal programmer, but it would be cool if Asus remembered that there are more languages in the world than C and Python.

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the textbook example of how not to release a product (the whole release has been odd; Asus is a huge multinational corporation, yet their release had "three-man basement band" written all over it).

So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and the like; basically anything with a touch-screen that does something.


The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix, for instance, relies heavily on Node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end that couples with a HTML5 front-end. This is a ready to use environment that you can deploy your own applications through. Things like storage, a nice looking UI, user logon and credentials and much, much more is all implemented for you. You don’t have to use it of course, you can write your own system from scratch if you like. We created “Amibian” to demonstrate just how powerful Smart Pascal can be in the right hands.

With this in mind, my main concern when testing SBCs (single-board computers) is obviously web performance. By default JavaScript is a single-core, event-driven runtime; you can spawn threads of course, but it's done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system-friendly, a gentle giant if you like, which has turned out to be a good thing, because the way JS schedules execution makes it ideal for clustering!

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost whole ecosystems in themselves. With JIT compilers and aggressive low-level optimization, it's a whole new ballgame.

Making a scale

To give you a better context to see where the Tinkerboard is on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and see how each of them perform on different boards. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and last but not least: the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning that they were compiled within the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b


Bassoon ran well, it's not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (UAE4Arm contains hand-optimized assembler, so it's not exactly a fair fight), but when it comes to modern HTML5, the PI doesn't stand a chance. You could perhaps use a Raspberry PI 3b for simple applications which are not graphics- and CPU-intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don't want a Raspberry when doing high-quality IOT, unless it's headless code and node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box; in fact you can target 40 embedded systems without any change to your code. But for large, high-quality web front-ends, the PI just won't cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick-ass ARM experience anyway. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4


The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here. That's only 10€ more than the Raspberry PI, which is a toy at best. The ODroid XU4 delivers a good Linux desktop, and it's well worth the extra 10€ when compared to the PI.

Personally I don't understand why people keep buying PIs when there are so many better options on the market now. At least not if web technology is involved. A small server or emulator, sure, but not HTML5 and browsers. The PI just can't handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that ODroid XU4 can run Quake.js! The PI can’t even start that because it’s so demanding. It is one of the largest and most resource hungry asm.js projects out there – but ODroid XU4 did a fantastic job.

Now it's not a silky smooth experience; I would guess something along the lines of 17-20 fps. But you know what? That's pretty good for a $45 board.

I have owned far worse x86 PC’s in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that something had been done by Asus to rectify the situation, though, because Asus really should have done a better job before releasing it. It's now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have waited with the release. I was not alone in butchering the board; it has been a source of frustration for those that bought it. 75€ is not much, but no one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order I have a feeling the Jessie image will be even faster – Ubuntu has always been a high-quality distribution, but it’s also one of the most demanding. One might even say it’s become bloated. But it does deliver a near Microsoft Windows like experience which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus has cleaned up their act and implemented the drivers properly, and you can feel that the moment the desktop comes into view. With the PI you are always fighting lagging performance. When you start a program the whole system freezes for a while, when you quit a program the system freezes, hell, when you move the mouse around the system bloody freezes! Well, that is not the case with the Tinkerboard, that's for sure. The Tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC, to be honest.

  • Skid-Row: 10-15 frames per second
  • Quake: Full screen 32bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any bit of embedded coding, regardless if you are using Smart Mobile Studio or some other devkit — this is the board to get (!)

As already mentioned, it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, and you can explore software far more complex than on both the PI and ODroid combined. With the Tinkerboard you can finally deliver a state-of-the-art product built with off-the-shelf web components. It's in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinkerboard I felt cheated. It was so frustrating, because the specs were so good and the terrible performance simply came down to sloppy work and Asus releasing it prematurely for cash (let's face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. As I mentioned, Ubuntu is not the racecar of distributions out there. Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompile every piece with maximum LLVM optimization and as few running services as possible, the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the tinkerboard! It is by far the coolest board in my collection now. In fact it’s so good that I’m donating one to my daughter. She is presently using an iMac which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and Youtube, buy a nice touch-screen display and wall mount it in her room.

Well done Asus!

Where is PowerPC today?

August 5, 2017 5 comments

Phase 5 PowerUP board prototype

Anyone who messed around with computers back in the 90s will remember PowerPC. This was the only real alternative to Intel's complete dominance with the x86 CPUs, and believe me when I say the battle was fierce! Behind the PowerPC you had companies like IBM and Motorola, companies that both had (or have) an axe to grind with Intel. At the time the market was split in half, with Intel controlling the business PC segment, while Motorola and IBM represented the home computer market.

The moment we entered the 1990s it became clear that Intel and Microsoft were not going to stay on their side of the fence, so to speak. For Motorola in particular this was a death match in the true sense of the word, because the loss of both Apple and Commodore represented billions in revenue.

What could you buy in 1993?

The early 90s were bittersweet for both Commodore and Apple. Faster and affordable PCs were already a reality, and as a consequence both Amiga machines and Macs were struggling to keep up.

The Amiga 1200 still represented a good buy. It had a massive library of software, both for entertainment and serious work. But it was never really suited for demanding office applications. It did wonders in video and multimedia development, and of course games and entertainment, but the jump in price between the A1200 and the A4000 became harder and harder to justify. You could get a well-equipped Mac with professional tools in that range.

Apple, on the other hand, was never really an entertainment company. Their primary market was professional graphics, desktop publishing and music production (Photoshop, Pro Tools, Logic etc. were exclusive Mac products). When it came to expansions and ports, they were more interested in connecting customers to industrial printers, MIDI devices and high-volume storage. The Mac was always a machine for the upper class, people with money to burn; the Amiga dominated the middle class. It was a family-type computer.

But Apple was not a company in hiding, neither from Commodore nor from the Wintel threat. So in 1993 they introduced the Macintosh Quadra series to the consumer market. Unlike their other models this was aimed at home users and students, meaning that it was affordable, powerful and could be used for both homework and professional applications. It was a direct threat to the upper middle class that could afford the big-box Amiga machines.


The 68k Macintosh Quadra came out in October of 1993

But no matter how brilliant these machines were, there was no hiding the fact that when it came to raw power, the PC was not taking any prisoners. It was graphically superior in every way, and Intel kept doubling CPU speed year after year, just as Moore's law had predicted.

With the 486DX2 looming on the horizon, it was game over for the old and faithful processors. The Motorola 68k family had been around since the late '70s – it was practically an institution – but it was facing enemies on all fronts and simply could not stand in the way of evolution.

The PowerPC architecture

If you are in your 20s you won't remember this, but back in the late '80s and early '90s the battle between computer vendors was indeed fierce. You have to take into consideration that Microsoft and Intel did a real number on IBM. Microsoft stabbed IBM in the back and launched Windows as a direct competitor to IBM's OS/2. When I write "stabbed in the back" I mean that literally, because Microsoft was initially hired to create parts of OS/2. It turned into the typical lawsuit mess, not unlike Microsoft and Sun later, where people would pick sides and argue about who the culprit really was.

As you can imagine, IBM was both bitter and angry at Microsoft for stealing the home PC market in such a shameful way. Microsoft was supposed to help IBM and be their ally, but turned out to be their fiercest competitor. IBM had also created a situation where the PC was licensed to everyone (hence the term "IBM clone"), meaning that any company could create parts for it and there was little IBM could do to control the market the way they were used to. They would naturally collect royalties from these companies (and would later retire 99% of their own products – why work when you get billions for doing nothing?), but at the time they were still in the game.

Motorola was in a bad situation themselves, with the 68k line of processors clearly incapable of facing the much faster x86 CPUs. Something new had to be created to secure their market share.

The result of this “marriage of necessity” was the PowerPC line of processors.


The Apple "Candy" Macs made PPC and computing sexy

Apple jumped on the idea. It was the only real alternative to x86. And you have to remember that had Apple gone x86 at that point, they would basically have fed the forces that wanted them dead. You could hardly make out where Microsoft ended and Intel began during the early '90s.

I'm going to spare you the whole fall and rebirth of Apple. Needless to say, Apple reached the point where their branch of PowerPC processors caused more problems than benefits. The type of PowerPC processors Apple used generated an absurd amount of heat, and it was turning into a real problem. We see this in their later models, like the dual-CPU PowerMac G5, where 40% of the cabinet is dedicated purely to cooling.

And yes, Commodore kicked the bucket back in 1994, so they never finished their new models. Which is a damn shame, because unlike Apple they went with a dedicated RISC processor. Those models would not have suffered the heating problems the PPCs used by Apple had to deal with.

Note: PPC and RISC are two sides of the same coin. PPC processors are RISC-based, but naturally there exist hundreds of different implementations. To avoid a ton of arguments around this topic I treat PPC as something different from the PA-RISC that Commodore was playing with in their Hombre "skunkworks" project.

You can read all about Apple's strain of PowerPC processors here, and PA-RISC here.

PPC in modern computers?

I am going to be perfectly honest. When I heard that the new Amiga machines were based on PowerPC, my reaction was less than polite. I mean, who the hell would use PowerPC in this day and age? Surely Apple's spectacular failure would stand as a warning for all time? I was flabbergasted, to say the least.

The AmigaOne came out and I didn't even give it the time of day. The Sam440 motherboards came out; I couldn't care less. It would have been nice to own one, but the price at the time and the lack of software were just too disproportionate to make sense.

And now there is the Amiga x5000, with a smaller, more affordable A1222 (a.k.a. "Tabor") model just around the corner. Both are equipped with a PPC CPU. There are just two logical conclusions you can draw when faced with this: either the maker of these products is nuttier than a Snickers bar, or there is something the general public doesn't know.

What the general public doesn't know has turned out to be quite a lot. While you would think PPC was dead and buried, the reality is not that simple. It turns out there is not just one PPC family (or branch) but several. The one Apple used back in the day (and that MorphOS for some odd reason still supports) represents just one branch of the PPC tree, if you like. I had no idea this was the case.

The first thing you are going to notice is that the CPU in the new Amigas doesn't have the absurd cooling problems the old Macs suffered. There are no 20 cm cooling ribs, you don't need two fans on Ritalin to prevent a CPU meltdown, and you don't need a custom aluminium case to keep it cool (everyone thinks the Mac Pro cases were just to make them look cool; it turned out to be more literal – the case was there to turn the inside into a fridge).

In other words, the branch of PPC we have known so far, the one marketed as "PowerPC" by Apple, Phase5 and everyone else back in the '90s, is indeed dead and buried. But that was just one branch, one implementation of what is known as PPC.

Remember when ARM died?

When I started to dig into the whole PPC topic I could not help but think about the Arm processor. It's almost spooky to reflect on how much we, the consumers, blindly accept as fact. Just think about it: you were told that PowerPC was the bomb, so you bought that. Then you were told that PowerPC was crap and x86 was the bomb, so you mentally buried PowerPC and bought x86 instead. The consumer market is the proverbial sheep farm, where most of us just blindly accept whatever advertising tells us.

This was also the case with Arm. Remember a company called Acorn? It was a great British company that invented, among other things, the Arm core. I remember reading articles about Acorn when I was a kid. I even sold my Amiga for a while and messed around with an Acorn Archimedes. A momentary lapse of sanity, I know; I quickly got rid of it and bought back my Amiga. But I did learn a lot from messing around in RISC OS.


The Acorn Archimedes, a brilliant RISC-based machine that sadly didn't make it

My point is, everyone was told that Arm was dead back in the '80s. The Acorn computers used a pure RISC processor at the time (again, PPC is a RISC-based CPU, but I treat them as separate since the designs are miles apart), and it was no secret that they were hoping to equip future Acorn machines with this new and magical Arm thing. Reading about the power and speed of Arm was very exciting indeed. Sadly, such a computer never saw the light of day back in the '80s. Acorn went bust, and the market rolled over Acorn much like it would Commodore later.

The point I'm trying to make is that everyone was told Arm died with Acorn. And once that idea was planted in the general public, it became a self-fulfilling prophecy. Arm was dead, end of story. It doesn't matter that Acorn had set up a separate company that was unaffected by the bankruptcy. Once the public deems something dead, it just vanishes from the face of the earth.

Fast forward to our time, and Arm is no longer dead – quite the opposite! It's presently eating its way into just about every piece of electronics you can think of. In many ways Arm is what made the IoT revolution possible. The whole Raspberry Pi phenomenon would quite frankly never have happened without Arm. The low price, coupled with fantastic performance – not to mention that these CPUs rarely need cooling (unless you overclock the hell out of them) – has made Arm the most successful CPU design ever made.

The PPC market share

With Arm's so-called death and rebirth in mind, let's turn our eyes to PPC and look at where it is today. PPC has suffered pretty much the same fate as Arm once did. Once a branch of the tech is declared "dead" by media and spin-doctors – regardless of the fact that PPC is actually a cluster of designs, not a single design or "chip" – the general public blindly follows and mentally buries the whole subject.

And yes, I admit it, I am guilty of this myself. In my mind there was no distinction between PPC in general and Apple's PowerPC. Which is a bit like not knowing the difference between rock & roll as a genre and KISS as a band. To draw the parallel out: what we have done is basically ban all rock bands, regardless of where they are from, because one band once gave a lousy concert.

And that is where we are at. PPC has become invisible in the consumer market, even though it's there. Which is understandable considering the commercial mechanisms at work – but is PPC really dead? This should be a simple question. And commercial mechanisms notwithstanding, the answer is a solid no. PPC is not dead at all. We have just parked it in a mental limbo. Out of sight, out of mind and all that.


PlayStation 3, Nintendo Wii U and PlayStation VR all use Freescale PPC

PPC today has a strong foothold in industrial computing. The oil sector is one market that uses PPC SBCs (single-board computers) extensively. You will find them in valve controllers, pump and drill systems, and pretty much any system that requires a high degree of reliability.

You might also be surprised to learn that cheap PPC SBCs enjoy the same low power requirements (3.3–5.0 V) that draw people to Arm. And naturally, the more powerful the chip, the more juice it needs.

The reason PPC is popular and still being used with great success is first of all reliability. That reliability is not just physical hardware but also software. PPC gives you two RTOSes (real-time operating systems) to choose from. Each comes with a software development toolchain that rivals whatever Visual Studio has to offer. So you get a good-looking IDE, a modern and up-to-date compiler, the ability to debug "live" on the boards – and also real-time signal-processing protocols. The list of software modules you can pick from is massive.


QNX RTOS desktop. This is a module you can embed in your own products

The last part of that paragraph, namely real-time signal processing, is extremely important. Can you imagine an oil valve under 40,000 tons of pressure failing, while the regulator that is supposed to compensate doesn't get the signal because Linux or Windows was busy with something else? It gets pretty nutty at that level.

The second market is set-top boxes, game consoles and TV-signal decoders. While this market is no doubt under attack from cheap Arm devices, PPC still has a solid grip here due to its reliability. PPC as an embedded platform has roughly two decades' head start over Arm when it comes to software development. That is a lifetime in computing terms.

When developers look at technology for a product they are prototyping, the hardware is just one part of the equation. Being able to easily write software for the platform, perform live debugging of code on the boards, and maintain products over decades rather than consumer-grade 1–3 year warranties – it's just a completely different ballgame. Technology like the external parts of a satellite dish runs for decades without maintenance. And there are good reasons why you don't see x86 or Arm there.


PlayStation 3 and the new PSX VR box both have a Freescale PPC CPU

As mentioned earlier, the PPC branch used today is not the same branch people remember. I cannot stress this enough, because mixing these up is like mistaking Intel for AMD. They may share many features, but ultimately they are completely different architectures.

The "PowerPC" label we know from back in the day was used to promote the branch that Apple used. Amiga accelerators, the PowerUP boards, also used that line of processors. And anyone who ever stuffed a PowerUP board in their A1200 probably remembers the cooling issues. I bought one of the more affordable PowerUP boards for my A1200, and to this day I remember the whole episode as a fiasco. It was haunted by instability, sudden crashes and IO problems – all of it connected to overheating.

But the PPC chips delivered today by Freescale Semiconductor (bought by NXP back in 2015) are different. They don't suffer the heat problems of their remote and extinct cousins, they have low power requirements, and they are incredibly reliable. Not to mention leagues more powerful than anything Apple, Phase5 or Commodore ever got their hands on.

Is Freescale for the Amiga a total blunder?

Had you asked me a few days back, chances are I would have said yes. I have known for a while that Freescale chips are used in the oil sector, but I had not taken into consideration the strength of the development tools and the important role an RTOS holds in a critical production environment.

I must also admit I had no idea that my PlayStation and Nintendo consoles were PPC-based. The PlayStation 4 doesn't use PPC on its motherboard, but if you buy the fantastic and sexy VR add-on package, you get a second module that is – again – a PPC-based product.

It also turns out that IBM's high-end mainframes, the ones Amazon and Microsoft use to build the bedrock of cloud computing, are likewise PPC-based. So once again we see that PPC is indeed there, playing an important role in our lives – but most people don't see it. So all of this is a matter of perspective.


The Nintendo Wii U uses a Freescale PPC CPU – not exactly a below-par gaming system

But the Amiga x5000 or A1222 will not be controlling a high-pressure valve or serving half a million users (hopefully); so does any of this affect the consumer at all? Does any of it hold value for you or me? What on earth would real-time feedback mean for a hobby user who just wants to play some games, watch a movie or code demos?

The answer is: it could have a profound benefit, but it needs to be developed and evolved first.

Musicians could benefit greatly from the superior signal-processing features, but as of writing I have yet to find any mention of this in the Amiga NG SDK. So while the potential is there, I doubt we will see it before the Amiga has sold in enough volume.

Fast and reliable signal dispatching in the architecture will also have a profound effect on IPC (inter-process communication), allowing separate processes to talk to each other faster and more reliably than on, say, Windows or Linux. Programmers typically use a mutex or a critical section to protect memory while it's being delivered to another process (note: painting in broad strokes here), and this is a very costly mechanism under Windows and Linux. For instance, the reason UAE is still single-threaded is that isolating the custom chips in separate threads and having them talk turned out to be too slow. If PPC can deal with that faster, it also means that processes can communicate faster and more interesting software can be made. Even practical things like a web server would greatly benefit from real-time message dispatching.
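Painting in equally broad strokes, the cost being described can be sketched in a few lines of Python: every single handoff between the two workers below has to go through a lock, and it is exactly that per-message synchronization overhead that adds up. All names and counts here are invented purely for illustration.

```python
import threading
import time

# Illustrative sketch only: each delivery between the two workers pays
# the mutex/critical-section toll described above.

lock = threading.Lock()
shared = []        # the memory being "delivered" to the other side
handoffs = 0
N = 10_000

def producer():
    global handoffs
    for i in range(N):
        with lock:                 # every delivery pays acquire/release
            shared.append(i)
            handoffs += 1

def consumer():
    received = 0
    while received < N:
        with lock:                 # and every pickup pays it again
            if shared:
                shared.pop(0)
                received += 1

start = time.perf_counter()
workers = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(f"{handoffs} locked handoffs in {time.perf_counter() - start:.3f}s")
```

Multiply that per-handoff cost by the message rate of something like UAE's custom-chip emulation, and it becomes clear why cheaper signal dispatching in the platform itself would matter.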


There is no lack of vendors for PPC SBCs online; here from Abaco Systems

So for us consumers, it all boils down to volume. The Freescale branch of PPC processors is not dead and will be around for years to come; they are sold by the millions every year to a great variety of businesses. And while most of them operate outside traditional consumer awareness, that does have a positive effect on pricing: the more a processor sells, the cheaper it becomes.

Most people feel that the Amiga x5000 is too expensive for a home computer, and they blame that on the CPU – forgetting that half of the subtotal goes into the motherboard and all the parts around the CPU. The CPU alone does not represent the price of a whole new platform. And that's just the hardware! On top of this you have the job of re-writing a whole operating system from scratch, adding features that have evolved between 1994 and 2017, and making it all sing together through custom-written drivers.

So it’s not your average programming project to say the least.

But is it really too expensive? Perhaps. I bought an iMac two years back that was supposed to be my work machine. I work as a developer and use VMware for almost all my coding. It turned out the i5-based beauty just didn't have the RAM. Fitting it with more (it came with 16 gigabytes; I need at least 32) would cost a lot more than a low-end PC. The sad part is that had I gone for a PC, I could have treated myself to an i7 with 32 gigabytes of RAM for the same price.

I later bit the bullet and bought a 3500€ Intel i7 monster with 64 gigabytes of RAM and the latest Nvidia graphics card. Let's just say that the Amiga x5000 is reasonable in that context. I basically have an iMac I have no use for; it just sits there collecting dust, reduced to a music player.

Secondly, we have to look at potential. The Mac and Windows machines have their potential completely exposed. We know what these machines do, and it's not going to change any time soon.

The Amiga has a lot of hidden potential that has yet to be realized. The signal processing is just one example. The most interesting is by far the Xena chip (XMOS), which allows developers to implement custom hardware in software. It might sound like an FPGA, but XMOS is a different technology. Here you write code using a custom C compiler that generates a special brand of opcodes. Your code is loaded onto a part of the chip (the chip is divided into a number of squares, each representing a piece of logic, or a "custom chip" if you like) and will then act as a custom chip.


The Amiga x5000 in all her glory; notice the moderate cooling for the CPU

The XENA technology could really do wonders for the Amiga. Instead of relying on traditional library files executed by the main CPU, things like video decoding, graphical effects, auxiliary 3D functionality and even emulation (!) can be handled by XENA and executed in parallel with the main CPU.
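Conceptually, offloading work to a chip like XENA is the same pattern as dispatching jobs to a parallel execution unit while the main thread carries on. The Python sketch below is only a loose analogy of that idea – `decode_block` is an invented stand-in, not a real codec or any actual XENA API:

```python
from concurrent.futures import ThreadPoolExecutor

# Loose analogy only: hand blocks of work to a separate execution unit
# (the "XENA" side) while the "main CPU" thread keeps doing its own work.

def decode_block(block):
    # Stand-in for heavy decoding work done off the main path.
    return sum(block) % 256

blocks = [list(range(i, i + 64)) for i in range(0, 1024, 64)]

with ThreadPoolExecutor(max_workers=2) as offload_unit:
    pending = [offload_unit.submit(decode_block, b) for b in blocks]  # offloaded
    main_cpu_results = [n * n for n in range(8)]  # main thread keeps working
    decoded = [f.result() for f in pending]       # collect finished blocks

print(len(decoded), "blocks handled off the main path")
```

The point of the pattern is that the main flow never stalls on the heavy work; it only synchronizes when it actually needs the results.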

If anything is going to make or break the Amiga, it wont be the Freescale PPC processor – it will be the XENA chip and how they use it to benefit the consumer.

Just imagine running UAE almost solely on the XENA chip, emulating 68k applications at near-native speed without using the main CPU at all. Sounds pretty good! And this is a feature you won't find on a PC motherboard. As always, they will add it should it become popular, but right now it's not even on the radar.

So I for one do believe that the next-generation Amiga machines have a shot. The A1222 is probably going to be the defining factor. It will retail at an affordable price (around 450€) and will no doubt go head-to-head with both consoles and mid-range PCs.

So as always it's about volume, timing and infrastructure. Everything but the actual processor, to be honest.

Last words

It's been a valuable experience to look around and read up on PPC. When I started my little investigation I had a dark picture in my head, one where the new Amiga machines were just a waste of time. I am happy to say that this is not true, and that the Freescale processors are indeed alive and kicking.

It was also interesting to see how widespread PPC technology really is. It's not just a specialist platform, although that is absolutely where its financial strength lies; it ships in everything from your home router to your TV-signal decoder or game system. So it does have a foot in the consumer market, but as I have outlined here, most consumers have parked it in a blind spot, and we associate the word "PowerPC" with Apple's fiasco in the past. Which is a bit sad, because it's neither true nor fair.


Amiga OS 4.x is turning out to be a very capable system

I have no problem seeing a future where the Amiga becomes a viable commercial product again. I think there is some way to go before that happens, and the spearhead is going to be the A1222 or a similar product.

But as I have underlined again and again, it all boils down to developers. A platform is only as good as the software you can run on it, and Hyperion should really throw themselves into porting games and creativity software. They need to build up critical mass and ship the A1222 with a ton of titles.

For my personal needs I will be more than happy just owning the x5000. It doesn't need to be a massive commercial success, because the Amiga is in my blood and I will always enjoy using it. And yes, it is a bit expensive, and I'm not in the habit of buying machines like this left and right. But I can safely say that this is a machine I will enjoy for many, many years to come – regardless of what others may feel about it.

I would suggest that Hyperion lower the price to somewhere around 1000€ if possible. Right now they need to think volume rather than profit – and hopefully Hyperion will start making the OS compatible with Arm as well. Again my thoughts go to volume, and to the fact that IoT and embedded systems need an alternative to Linux and Windows 10 Embedded.

But right now I’m itching to start developing for it – and I’m not alone 🙂

Amiga OS 4, object pascal and everything

August 2, 2017 1 comment

Those who read my blog know that I'm a huge fan of the Commodore Amiga machines. This was a line of computers that took the world by storm around 1985 and held its ground until 1993. Sadly the company had to file for bankruptcy after a series of absurd financial escapades by its management.


The original team before it fell prey to mismanagement

The death of Commodore is one of the great tragedies in computing history. There is no doubt that Commodore represented a much-needed alternative to Microsoft and Apple – and with the death of Commodore, technological innovation took a turn for the worse.

Large books have been written on this subject, as well as great documentaries and movies – so I’m not going to dig further into the drama here. Ars Technica has a range of articles covering the whole story, so if you want to understand how the market got the way it is today, head over and read up on the story.

On a personal level I find the classic Amiga machines a source of great inspiration even now. Despite Commodore dying in the '90s, today, decades after the fact, I still stumble over amazing source code for this awesome computer. There are a few things in Amiga OS that hint at its true age, but ultimately the system has aged with amazing elegance and grace. It just blows people away when they realize that the Amiga desktop hit the market in 1985 – and that much of what we regard as a modern desktop experience is actually inherited from the Amiga.


Amiga OS is highly customizable. Here showing OS 3.9 [the last of the classic OS versions]

As I type this, the Amiga is going through a form of revival. It's actually remarkable to be a part of this, because the scope of such an endeavour is monumental. But even more impressive is just how many people are involved. It's not some tiny "computer cult" where a bunch of misfits hang out in sad corners of the internet. Nope, we are talking about thousands of educated, technical people who still use their Amiga computers on a daily basis.

For instance: realizing the new Amiga models has cost £1.2 million, so there are serious players involved.

The user base is varied, of course; it's not all developers and engineers. You have gamers who love to kick back with some high-quality retro gaming. You have graphics designers who pixel large masterpieces (an almost lost art in this day and age). And you have musicians who write awesome tracks, then use them to spice up otherwise flat and dull PC-based tracks.

What is even more awesome is the coding. Even the latest Freepascal has been ported, so if you were expecting people hand-punching hex codes you will be disappointed. While the Amiga is old in technical terms, it was so far ahead of the competition that people are surprised at just how capable the classic systems are.

And yes, people code games, demos and utility programs for the classic Amiga systems even today. I just installed a Dropbox cloud driver on my system and it works brilliantly.

The brand new Amiga

Classic Amiga machines are awesome, but this post is not about the old models; it's about the new models coming out now. Yes, you read that right: next-generation Amiga computers have finally become a reality. Having waited for 22 years, I am thrilled to say that I just ordered a brand new Amiga x5000 (and I can't wait to install Freepascal and start coding).

It's also quite affordable. The x5000 model (the power system) retails at around 1650€, which is roughly half of what I paid for my Intel i7 / Nvidia GeForce GTX 970 workstation. And the potential for a developer is enormous.

Just think of the onslaught of Delphi code I can port over, and how instrumental my software could become by getting in early. Say what you will about Freepascal, but it tends to be the second compiler to hit a platform after GCC. And with Freepascal in place, a Delphi developer can do some serious magic!

Right. So the first Amiga is the power model, the Amiga x5000. It can be ordered today. It costs the same as a good PC (in the 1600€ range, depending on import tax and VAT). This is far less than I paid for my crap iMac (which I never use anymore).

The power model is best suited for people who do professional work on the machine. Software development doesn't necessarily need all the firepower the x5000 brings, but more demanding tasks like 3D rendering or media composition will.

The next model is the A1222, which is due out around Christmas 2017 or the first quarter of 2018.


The A1222 "Tabor"

You would perhaps expect a mid-range model, something retailing at around 800€ or thereabouts – but the A1222 is without a doubt a low-end model.

It should retail for roughly 450€. I think this is a great idea, because AEON (who make the hardware) has different needs from Hyperion (who make the new Amiga OS – more on that further into the article). AEON needs to get enough units out to secure the platform's foundation, while Hyperion needs horizontal market penetration (read: become popular and hit other hardware platforms as well). These goals pull in different directions, just as they do for Windows and OS X – which is probably why Apple refuses to sell OS X without a Mac; they could end up competing with themselves.

A brave new Amiga OS

But there is more to this revival than just hardware. Many would even say that hardware is the least interesting part of the next-generation systems, and that the true value right now is the new and sexy operating system. Because what the world needs now, more than hardware (in my opinion), is a lightweight alternative to Linux and Windows. A lean, powerful, easy-to-use, highly customizable operating system that will happily boot on a $35 Raspberry Pi 3 or a $2500 Intel i7 monster. Something that makes computing fun, affordable and, most of all, portable!


My setup of Amiga OS 4, with FPC and Storm C/C++

And by lean I have to stress that the original Amiga operating system, the classic 3.x line that was developed all the way to the end, was initially created to thrive in as little as 512 KB. At most I had 2 megabytes of RAM in my Amiga 1200, and that was ample space to write and run large programs, play the latest games and enjoy the rich, colorful and user-friendly desktop environment. We have to remember that the Amiga had a multitasking, window-based OS a decade before Microsoft.

Naturally the next-generation system is built to deal with the realities of 2017 and beyond, but incredibly enough the OS will run just fine with as little as 256 megabytes. Not even Windows Embedded can boot on that. Linux comes close with distributions like Puppy and DSL, but Amiga OS 4 gives you a lot more functionality out of the box.

What way to go?

OK, so we have new hardware, but what about the software? Are the new Amigas supposed to run some ancient version of Amiga OS? Of course not! The people behind the new hardware have teamed up with a second company, Hyperion, that has – believe it or not – done a full re-implementation of Amiga OS! And naturally they have taken the opportunity to get rid of annoying behavior and add behavior people expect in 2017 (like double-clicking on a window header to maximize it, easy access to menus and much more). Visually, Amiga OS 4 is absolutely gorgeous. Just stunning to look at.

Now there are many different theories and ideas about where a new Amiga should go. Sadly it's not as simple as "hey, let's make a new Amiga"; the old system is practically boiled in patent and legislation issues. It is close to an investor's worst nightmare, since ownership is so fragmented. Back when Commodore died, different parts of the Amiga were sold to different companies and individuals. The main reason we haven't seen a new Amiga until now is that the owners have been fighting among themselves. The Amiga as we know it has been caught in limbo for close to two decades.

My stance on the whole subject is that Trevor Dickinson, the man behind the next-generation Amiga systems, has done the only reasonable thing a sane human being can do when faced with a proverbial patent kebab: the old hardware is magical for those of us who grew up on it, but by today's standards it is an obsolete dinosaur. The same can be said of Amiga OS 3.9. So Trevor has gone for a full re-implementation of both the OS and the hardware.

The other predominant idea is more GNU/Linux in spirit, where people want Amiga OS to be platform-independent (or at least written in a way that lets the code run on different hardware as long as some fundamental infrastructure exists). This actually resulted in a whole new OS being written, namely Aros, a community-made Amiga OS clone. A project that has been perpetually maintained for 20 years now.


Aros, a community re-implementation of Amiga OS for x86

While I think the guys behind Aros should be applauded, I do feel that AEON and Hyperion have produced something better. There are still kinks to work out in both systems – and don't get me wrong: I am thrilled that Aros exists, I just enjoy OS 4 more than I do Aros. Which is my subjective opinion, of course.

New markets

Right. With all this in mind, let us completely disregard the old Amiga and the Commodore drama, and instead focus on the new operating system as a product. It doesn't take long before a few thrilling opportunities present themselves.

The first that comes to my mind is how well suited OS 4 would be as an embedded platform. The problem with Linux is ultimately the same one that haunts OS X and Windows, namely that size and complexity grow proportionally over time. I have seen Linux systems as small as 20 megabytes, but for running X-based full-screen applications that take advantage of hardware-accelerated graphics, you really need a bigger infrastructure. And the moment you start adding those packages, Linux puts on weight and dependencies fast!


The embedded market is one place where Amiga OS would do wonders

By embedded systems I'm not just talking about headless servers or single-application devices. Take something simple like a ticket booth, an information kiosk or a POS terminal. Most of these run either Windows Embedded or some variation of Linux. Since both of these systems require a fair bit of infrastructure to function properly, the price of the hardware typically starts at around 300€. Delphi- and C++-based solutions, at least those that I have seen, end up using boards in the 300€ to 400€ range.

This price-tag is high considering the tasks a POS terminal or ticket system actually performs. You usually have a touch-enabled screen, a network connection, a local database that caches information should the network go down – the rest is visual code for dealing with menus, options, identification and fault tolerance. If a Visa terminal is included then a USB driver must also be factored in.

These tasks are not heavy in themselves. So in theory a smaller system, properly adapted to the job, could do the same work if not better – at a much lower price.

More for less, the Amiga legacy

Amiga OS would be able to deliver the exact same experience as Windows and Linux – but running on more cost-effective hardware. Where modern Windows and Linux typically need at least 2 gigabytes of RAM for a heavy-duty visual application, full network stack and database services – Amiga OS is happy to run in as little as 512 megabytes. Everything is relative of course, but running a heavy visual application with less than a gigabyte of memory in 2017 is rare to say the least.

Already we have cut costs. Power ARM boards ship with 4 gigabytes of RAM, powered by a snappy ARM v9 CPU – and medium boards ship with 1 or 2 gigabytes of RAM and a less powerful CPU. The price difference is already a good 75€ on RAM alone. And if the CPU is a step down, from ARM v9 to ARM v8, we can push it down by a good 120€. At least if you are ordering in bulk (say 100 units).

The exciting part is ultimately how well Amiga OS 4 scales. I have yet to try this since I don't have access to the machine I have ordered yet – and sadly Amiga OS 4.1 is compiled purely for PPC. This might sound odd since everyone is moving to ARM, but there are still plenty of embedded systems based on PPC. Still, I would urge our good friend Trevor Dickinson to establish a migration plan to ARM, because it would kill two birds with one stone: upgrading the faithful Amiga community while entering the embedded market at the same time. Since the same hardware would be involved, these two factors would stimulate the growth and adoption of the OS.


The PPC platform gives you a lot of bang-for-the-buck in the A1222 model

But for the sake of argument, let's say that Amiga OS 4 scales exceptionally well, meaning that it will happily run on ARM v8 with 1 gigabyte of RAM. It would then run on systems like the Asus Tinkerboard, which retails at 60€ incl. VAT. This would naturally not be a high-performance system like the A5000, but embedded is not about that – it's about finding something that can run your application safely, efficiently and without problems.

So if the OS scales gracefully for ARM, we have brought the hardware cost down from 300€ to 60€ (I would round that up to 100€, something always comes up). If the customer's software was Windows-based, a further 50€ can be subtracted from the software budget for bulk licensing. Again, buying in bulk is the key.

Think different means different

Already I can hear my friends that are into Linux yell that this is rubbish and that Linux can be scaled down from 8 gigabytes to 20 megabytes if so needed. And yes, that is true. But what my learned friends forget is that Linux is a PITA to work with if you haven't spent a considerable amount of time learning it. It's not a system you can just jump into and expect results from the next day. Amiga OS has a much more friendly architecture, and things that are often hard to do on Windows and Linux are usually very simple to achieve on the Amiga.

Another fact my friends tend to forget is that the great majority of commercial embedded projects are done using commercial software. Microsoft actually presented a paper on this when they released their IoT support package for the Raspberry Pi. And based on personal experience I have to agree: in the past 20 years I have only seen two companies that use Linux as their primary OS both in products and in their offices. Everyone else uses Windows Embedded for their products and day-to-day management.

So what you get are developers using traditional Windows development tools like Visual Studio or Delphi (although that is changing rapidly with node.js). And they might be outstanding programmers, but Linux is still reserved for server administrators and the odd few that use it on a hobby basis. We simply don't have time to dig into esoteric man pages or explore the intricate secrets of the kernel.

The end result is that companies go with what they know. They get Windows embedded and use an expensive x86 board. So where they could have paid 100€ for a smaller SBC and used Amiga OS to deliver the exact same product — they are stuck with a 350€ baseline.

Be the change

The point of this little post has been to demonstrate that yes, the embedded market is more than open for alternatives. Linux is excellent for those that have the time to learn its many odd peculiarities, but over the past 20 years it has grown into a resource-hungry beast. Which is ironic, because it used to be Windows that was the bloated scapegoat. And to be honest, Windows Embedded is a joy to work with and much easier to shape to your exact needs – but the prices are ridiculous and it won't perform well unless you throw at least 2 gigabytes of memory at it (relative to the task of course, but in broad strokes that's the ticket).

But wouldn't it be nice with a clean, resource-friendly and extremely fast alternative? One where auto-starting applications in exclusive mode was a one-liner in the startup-sequence file? A file which is actually called "startup-sequence", rather than some esoteric "init.d" alias that is neither a folder nor an archive but something reminiscent of the Windows registry? A system where libraries and the whole folder structure that makes up drivers, shell, desktop and services are intuitively named for what they are?


Amiga OS could piggyback on the wave of low-cost ARM SBC’s that are flooding the market

You could learn how to use Amiga OS in two days tops, yet it holds great depth, so you can grow with the system as your needs become more complex. The architecture is so well-organized that even if you know nothing about settings, a folder named "prefs" doesn't leave much room for misinterpretation.

But the best thing about Amiga OS is by far how elegantly it has been architected. When software is planned right, it tends to factor out the things that would otherwise become obstacles. It's like a well-oiled machine where each part makes perfect sense and you don't need a huge book to understand it.

From where I am standing, Amiga OS is ultimately the biggest asset Hyperion and AEON have to offer. I love the new hardware that is coming out – but there is no doubt in my mind, and I know I am right about this, that the market these companies should focus on now is not PPC – but rather ARM and embedded systems.

It would take an effort to port over the code from a PPC architecture to ARM, but having said that – PPC and ARM have much more in common than say, PPC and x86.

I also think the time is ripe for a solid power ARM board for desktop computers. While smaller boards get most of the attention – the Raspberry Pi, the ODroid XU4 and the Tinkerboard – once you move the baseline beyond 300€ you see some serious muscle. Boards like the iMX6 OpenRex SBC Ultra pack a serious punch, and as expected it ships with 4 gigabytes of RAM out of the box.

While it’s impossible to do a raw comparison between the A1222 and the iMX6 OpenRex, I would be surprised if the iMX6 delivered terrible performance compared to the A1222 chipset. I am also sure that if we beefed up the price to 700€, aimed at home computing rather than embedded – the ARM power boards involved would wipe the floor with PPC. There are a ton of factors at play here – a fast CPU doesn’t necessarily mean better graphics. A good GPU should make up at least 1/5 of the price.

Another cool factor regarding ARM is that the BIOS gives you a great deal of features you can incorporate into your product. All the ARM boards I have give you FAT32 support out of the box, for instance; this is supported by the SoC itself, so you don't need to write filesystem drivers for it. Most boards also support the Ext2 and Ext3 filesystems, recognized automatically on boot. This rich BIOS/mini-kernel is what makes ARM so attractive to code for, because it takes away a lot of the boring, low-level tasks that took months to get right in the past.

Final words

This has been a long article, from the early years of Commodore – all the way up to the present day and beyond. I hope some of my ideas make sense – and I also hope that those who are involved in the making of the new Amiga perhaps pick up an idea or two from this material.

Either way I will support the Amiga with everything I've got – but we need a couple of smart ideas and concrete plans from management. And in my view, Trevor is doing exactly what is needed.

While we can debate the choice of PPC, it’s ultimately a story with a long, long background to it. But thankfully nothing is carved in stone and the future of the Amiga 5000 and 1222 looks bright! I am literally counting the days until I get one!

Amibian.js on bitbucket

August 1, 2017 Leave a comment

The Smart Pascal driven desktop known as Amibian.js is available on Bitbucket. It was hosted in a normal GitHub repository earlier – so make sure you clone from this one.

About Amibian.js

Amibian is a desktop environment written in Smart Pascal. It compiles to JavaScript and can be used through any modern HTML5 compliant browser. The project consists of both a client and a server, both written in Smart Pascal. The server is executed by node.js (note: please install PM2 for better control over scaling and task management).

Amibian.js is best suited for embedded projects, such as kiosk systems. It has been used in tutoring software for schools, custom routers and a wide range of different targets. It can easily be molded into a rich environment for SAD (single application device) based software – but also made to act more like a real operating system:

  • Class driven filesystem, easy to target external services
    • Ram device-type
    • Browser cache device-type
    • ZIPfile device-type
    • Node.js device-type
  • Cross domain application hosting
    • Traditional IPC protocol between hosted application and desktop
    • Shared resources
      • css styling
      • glyphs and images
    • Event driven visual controls
  • Windowing manager makes it easy to implement custom applications
  • Support for fullscreen API

Amibian ships with UAE.js (based on the SAE.js codebase) making it possible to run Amiga software directly on the desktop surface.

The bitbucket repository is located here:


Drag and drop with smart pascal

July 28, 2017 Leave a comment

Drag and drop under HTML5 is incredibly simple; even simpler than Delphi's mechanisms. Having said that, it can be a PITA to work with due to the async nature of the JavaScript API.

This functionality is just begging to be isolated in a non-visual controller (read: component), and it’s on my list of RTL features. But it will have to wait until we have wiped the list clean.


Drag and drop is useful for many web applications

Anyway, people have asked me about a simple way to capture a drag & drop event and kidnap the file data without any form tags involved. So here is a very simple ad-hoc example.

The FView variable is a reference to a visible control. In this case it is the form itself, so that you can drop files anywhere.

FView.Handle.ondragover := procedure (event: variant)
begin
  // In order to hijack drag & drop, this event must prevent
  // the default behavior. So we hotwire it
  event.preventDefault();
end;

FView.Handle.ondrop := procedure (event: variant)
begin
  event.preventDefault();
  var ev := event.dataTransfer;
  if (ev) then
  begin
    if (ev.items) then
    begin
      for var x := 0 to ev.items.length-1 do
      begin
        var LItem := ev.items[x];
        if (LItem) then
        begin
          if string(LItem.kind).ToLower() = "file" then
          begin
            // "file" is a reserved word, so we use LFile instead
            var LFile := LItem.getAsFile();

            var reader: variant;
            asm
              @reader = new FileReader();
            end;

            reader.onload := procedure (data: variant)
            begin
              // readyState 2 = DONE, the data has been loaded
              if reader.readyState = 2 then
              begin
                writeln("File data ready:");
                var binbuffer := reader.result;
                var raw: TDefaultBufferType;
                asm
                  @raw = new Uint8Array(@binbuffer);
                end;
                var Buffer := TBinaryData.Create();
                // work with the raw bytes via Buffer here
              end;
            end;

            // kick off the actual read; onload fires when it completes
            reader.readAsArrayBuffer(LFile);
          end;
        end;
      end;
    end;
  end;
end;
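For reference, the same hijack can be sketched in plain JavaScript – roughly what the Smart Pascal above compiles down to. The function names and the onFileData callback are placeholders of my own, not part of the RTL:

```javascript
// Collect every dropped item of kind "file" from the DataTransfer object.
function extractDroppedFiles(event) {
  const files = [];
  const dt = event.dataTransfer;
  if (dt && dt.items) {
    for (let i = 0; i < dt.items.length; i++) {
      const item = dt.items[i];
      if (item && item.kind === "file") {
        files.push(item.getAsFile());
      }
    }
  }
  return files;
}

// Attach the handlers to any visible DOM element.
function attachDropHandlers(view, onFileData) {
  // dragover must call preventDefault(), otherwise the browser
  // never fires "drop" on the element.
  view.ondragover = (event) => event.preventDefault();

  view.ondrop = (event) => {
    event.preventDefault();
    for (const file of extractDroppedFiles(event)) {
      const reader = new FileReader();
      reader.onload = () => {
        if (reader.readyState === 2) {   // 2 = DONE
          // reader.result is an ArrayBuffer; wrap it for byte access
          onFileData(file.name, new Uint8Array(reader.result));
        }
      };
      reader.readAsArrayBuffer(file);
    }
  };
}
```

The async part is the FileReader: you hand it the file and only get the bytes later in onload, which is exactly what makes a tidy component wrapper attractive.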




Object models, a Smart Pascal example

July 22, 2017 Leave a comment

Most information developers work with is hierarchical and organized in a classical parent-and-child manner. Parent-child relationships are so universal that they are everywhere, from how visual controls are organized to how elements in a document are stored. Whether it's a visual treeview in Delphi, an HTML element in a document or entries in a PDF file, it's pretty much all organized as a series of inter-linked parent-child relationships.

Since parent-child trees are so universal, developers usually end up with a unit that contains a couple of simple base classes. Typically these classes are used to create a model of things. The tree can contain the actual data itself – or, more commonly, it is used to represent a model that is later processed or realized visually.


Most frameworks are essentially parent-child hierarchies

Here is an example of such a unit. It compiles under Smart Pascal and should work fine for most versions. It has virtually no dependencies.

Populate the data property with whatever data you want to represent, then you can enumerate and work with the model. What you use it for is eventually up to you. I use it a lot when dealing with menus; it makes it easier to define menus and sub-menus and then simply feed the model to a construction routine.

unit DataNodes;

interface

uses
  System.Types;

type

  // for older SMS versions that lack "system.types", un-remark this:
  // TEnumResult = (erContinue = $A0, erBreak = $10);

  TCustomDataNode = class;
  TCustomDataNodeList = array of TCustomDataNode;

  TDataNodeEnumEnterProc = function (const Root: TCustomDataNode): TEnumResult;
  TDataNodeEnumExitProc = function (const Root: TCustomDataNode): TEnumResult;
  TDataNodeEnumProc = function (const Child: TCustomDataNode): TEnumResult;
  TDataNodeCompareProc = function (const Value: variant): boolean;

  TCustomDataNode = class
    property  Parent: TCustomDataNode;
    property  Caption: string;
    property  Data: variant;
    property  Children: TCustomDataNodeList;

    procedure Clear;

    function  Search(const Compare: TDataNodeCompareProc): TCustomDataNode;

    procedure ForEach(const Before: TDataNodeEnumEnterProc;
              const Process: TDataNodeEnumProc;
              const After: TDataNodeEnumExitProc); overload;

    procedure ForEach(const Process: TDataNodeEnumProc); overload;

    function  Serialize: string;
    class function  Parse(const JSonData: string): TCustomDataNode;

    constructor Create(const NodeOwner: TCustomDataNode;
                NodeText: string; const NodeData: variant); overload;

    constructor Create(const NodeOwner: TCustomDataNode;
                const NodeData: variant); overload;
  end;

  TDataNode = class(TCustomDataNode)
    property  Parent;
    property  Caption;
  end;

  TDataNodeTree = class(TCustomDataNode)
    property  Caption;
  end;

implementation

// TCustomDataNode

constructor TCustomDataNode.Create(const NodeOwner: TCustomDataNode;
            NodeText: string; const NodeData: variant);
begin
  inherited Create;
  Parent := NodeOwner;
  Caption := NodeText;
  Data := NodeData;
end;

constructor TCustomDataNode.Create(const NodeOwner: TCustomDataNode;
            const NodeData: variant);
begin
  inherited Create;
  Parent := NodeOwner;
  Data := NodeData;
end;

function TCustomDataNode.Serialize: string;
begin
  asm
    @result = JSON.stringify(@self);
  end;
end;

class function TCustomDataNode.Parse(const JSonData: string): TCustomDataNode;
begin
  asm
    @result = JSON.parse(@JSonData);
  end;
end;

procedure TCustomDataNode.Clear;
begin
  // clear recursively, then drop our own references
  for var x := 0 to Children.length-1 do
    if Children[x] <> nil then
      Children[x].Clear;
  Children.SetLength(0);
end;

function TCustomDataNode.Search(const Compare: TDataNodeCompareProc): TCustomDataNode;
var
  LResult: TCustomDataNode;
begin
  ForEach( function (const Child: TCustomDataNode): TEnumResult
    begin
      result := TEnumResult.erContinue;
      if Compare(Child.Data) then
      begin
        LResult := Child;
        result := TEnumResult.erBreak;
      end;
    end);
  result := LResult;
end;

procedure TCustomDataNode.ForEach(const Before: TDataNodeEnumEnterProc;
          const Process: TDataNodeEnumProc;
          const After: TDataNodeEnumExitProc);
begin
  if assigned(Before) then
    Before(self);
  if assigned(Process) then
    for var LChild in Children do
      if Process(LChild) = erBreak then
        break;
  if assigned(After) then
    After(self);
end;

procedure TCustomDataNode.ForEach(const Process: TDataNodeEnumProc);
begin
  if assigned(Process) then
    for var LChild in Children do
      if Process(LChild) = erBreak then
        break;
end;

end.
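Since Smart Pascal compiles to JavaScript, the Serialize/Parse pair above really just leans on JSON.stringify and JSON.parse. As a rough sketch, here is the same parent-child idea in plain JavaScript (the function names are mine, not from the unit):

```javascript
// Minimal parent-child node model with a JSON round-trip,
// mirroring the TCustomDataNode idea.
function makeNode(caption, data) {
  return { caption: caption, data: data, children: [] };
}

function addChild(parent, caption, data) {
  const node = makeNode(caption, data);
  parent.children.push(node);
  return node;
}

// Depth-first enumeration; the callback can return false to stop,
// like returning erBreak in the Pascal version.
function forEach(root, process) {
  for (const child of root.children) {
    if (process(child) === false) return false;
    if (forEach(child, process) === false) return false;
  }
  return true;
}

// Serialize/Parse. Note that the sketch deliberately has no parent
// back-reference: a cyclic structure would make JSON.stringify throw.
const serialize = (root) => JSON.stringify(root);
const parse = (text) => JSON.parse(text);
```

The missing parent link is the one design choice worth noting: JSON cannot represent cycles, so a serializable tree either omits the back-reference or rebuilds it after parsing.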


LDef parser done

July 21, 2017 Leave a comment

Note: For a quick introduction to LDef click here: Introduction to LDef.

Great news guys! I finally finished the parser and model builder for LDef!

That means we just need to get the assembler ported. This is presently running fine under Smart Pascal (I like to prototype things there since it's faster) – and it will be easy to port it over to Delphi and FreePascal after the model has gone through the steps.

I'm really excited about this project, and while I sadly don't have much free time – this is a project I truly enjoy working on. Perhaps not as much as Smart Pascal, which is my baby, but still; it's turning into a fantastic system.

Thoughts on the architecture

One of the things I added support for, and that I have hoped Embarcadero would add to Delphi for a number of years now, is support for contract coding. This is a huge topic that I'm not jumping into here, but one of the features it requires is support for entry and exit sections. Essentially, you can define code that executes before the method body, and code that executes directly after it has finished (before the result is returned, if it's a function).

This opens up some very clever ways of preventing errors, or at the very least giving the user better information about what went wrong. Automated tests also benefit greatly from this.

For example, a normal Object Pascal method looks like this:

procedure TForm1.MySpecialMethod;
begin
  writeln("You called my-special-method");
end;

The basis of contract design builds on the classical form and expands it like this:

procedure TForm1.MySpecialMethod;
  enter
  begin
    writeln("Before my-special-method");
  end;
  leave
  begin
    writeln("After my-special-method");
  end;
begin
  writeln("You called my-special-method");
end;

Note: contract design is a huge system and this is just a fragment of the full infrastructure.

What is cool about the before/after snippets, is that they allow you to verify parameters before the body is even executed, and likewise you get to work on the result before the value is returned (if any).

You might ask: why not just write the tests directly, like people do all the time? That is a fair point. But there will also be methods that you have no control over, like a wrapper method that calls a system library. Being able to attach before/after code to externally defined procedures helps take the edge off error testing.
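In a language without contract support, you can approximate the same idea for externally defined functions by wrapping them. A hypothetical JavaScript sketch (the names withContract, enter and leave are mine, not LDef syntax):

```javascript
// Wrap an existing function with "enter" and "leave" sections:
// enter validates the arguments before the body runs,
// leave inspects the result before it is returned.
function withContract(fn, enter, leave) {
  return function (...args) {
    if (enter) enter(args);          // runs before the body
    const result = fn(...args);      // the original body
    if (leave) leave(result, args);  // runs after, before returning
    return result;
  };
}

// Example: guard a divide routine we don't control.
const divide = (a, b) => a / b;
const safeDivide = withContract(
  divide,
  (args) => {
    if (args[1] === 0) throw new Error("divide: divisor must not be zero");
  },
  (result) => {
    if (!Number.isFinite(result)) throw new Error("divide: non-finite result");
  }
);
```

The wrapped function behaves exactly like the original on valid input, but a bad parameter is caught at the call boundary with a meaningful message, rather than surfacing later as a mystery value.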

Secondly, if you are writing a remoting framework where variant data and multi-threaded invocation is involved – being able to check things as they are dispatched means catching potential errors faster – leading to better performance.

As always, coding techniques are a source of argument – so I'm not going into that now. I have added support for it, and if people don't need it then fine, just leave it be.

Under LDef assembly it looks like this:

public void main() {
  enter {
    /* executes before the method body */
  }
  leave {
    /* executes after the body, before the result is returned */
  }
}
Well I guess that’s all for now. Hopefully my next LDef post will be about the assembler being ready – leaving just the linker. I need to experiment a bit with the codegen and linker before the unit format is complete.

The bytecode format needs to include enough information for the linker to glue things together. So every class, member, field etc. must be emitted in a way that allows the linker to quickly look things up. It also needs to write the actual, resulting method offsets into the bytecode.
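Purely as an illustration of that lookup problem (none of this reflects the actual LDef unit format), a linker-side symbol table can be as simple as a map from fully qualified member names to byte offsets:

```javascript
// Illustrative only: a toy symbol table for a linker.
// Each entry maps "Class.Member" to its offset in the bytecode blob,
// so call sites can be patched with real addresses at link time.
function buildSymbolTable(entries) {
  const table = new Map();
  for (const e of entries) {
    table.set(`${e.cls}.${e.member}`, e.offset);
  }
  return table;
}

function resolve(table, qualifiedName) {
  if (!table.has(qualifiedName)) {
    throw new Error(`link error: unresolved symbol ${qualifiedName}`);
  }
  return table.get(qualifiedName);
}
```

The point is simply that whatever the unit format ends up looking like, it must carry enough metadata to populate a structure like this in one pass.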

Have a happy weekend!

FMX 4 linux gets an update

July 20, 2017 Leave a comment

The Firemonkey framework that allows you to compile for the Linux desktop (Linux x86 server is already supported) just got a nice update. Among the changes is a nice radial gradient pattern – and several bugs have been squashed.


This is an awesome addition if you already have Delphi 10.2 – and if writing Ubuntu desktop applications is something you want, then this is the package to get!

Check it out: