Facebook, this must change

March 14, 2018

Facebook has grown to be more than just a social platform where friends meet. You have groups and communities of every conceivable type, where people of every conviction engage and debate anything you can think of. Groups where people have opinions, are passionate and put ideas to the test.

It has been grand, but lately a negative trend (or technique) has evolved; and sadly Facebook doesn’t seem to get the full scope of its impact. For them, that is.

Childish games

College student looks at sign on classroom door: Blame Shifting 101.

We did this as kids!

It reminds me of behaviour you could see in high school, where someone would do something illegal and then point the finger at those who tried to stop the act (also known as blame shifting). Today this has evolved into a type of “revenge” tactic, where individuals who lose an argument (regardless of what it may be) get back at others by falsely reporting them.

At first glance this looks silly enough. Go ahead and report me, I have nothing to hide, right? Well, it would be silly if Facebook actually took such complaints seriously and looked at what was written with human eyes. Sadly they don’t, and without any consequences for people who maliciously report users out of sheer spite – the stage is set for the worst of trolls to do what they do best: cause mischief and mayhem for upstanding members.

This has reached such heights that we now see the proverbial “drive-by” reporting of anyone a user dislikes or disagrees with (especially in political and economic forums), and it goes unchecked by Facebook.

This is a very negative trend for the platform and has already caused considerable damage; to Facebook, that is. Why? Well, people just move on when the format puts trolls, group campers and reporting snipers (call them what you will) at equal odds with honest, responsible adults that engage in debate.

Group campers and trolls

I was just informed that I had been “reported” and consequently expelled for 7 days due to a violation of terms. I was quite shocked to read this, so I took the time to go through these terms. I was at a complete loss as to which of their standards I had violated. And as it turned out, I had broken none of them. I would never dream of posting pornography, I have not made racist remarks (quite the opposite! In 2017 I kicked a total of 46 members from Delphi Developer for rubbish like that), nor am I a member of the anti-christ movement, and I don’t go around looking for fights either.

What I had done, however, was to catch two members of a group using fake profiles. And in debate with one of these, telling the individual that his trolling of the group was neither welcome nor decent – his revenge was to report me (!).


Not all sayings translate well to English

What really surprised me was how Facebook seems to take things at face value. There is no way that a human being could be behind such a ruling; at least not one fluent in Norwegian.

First they seem to employ a simple word check, no doubt looking for curse and swear words (using Google Translate or some other lookup service). If you pass that, they seem to check for references to a person or individual in conjunction with negative phrasing. Which, let’s be honest, is a somewhat grey area considering their service covers a whole planet with hundreds of cultures.

In this case the only conceivable negative phrase in my post was “Go troll under a bridge”, which is not an insult but an expression with roots in Norwegian folklore. In Norwegian lore trolls typically lived either up in the mountains or under a bridge. And you had to pay the troll not to eat you (a somewhat fitting description considering the situation).

This goes to character. Namely, when the person (or fake profile) here did nothing but post statements designed to cause problems for other members, then that is the very definition of a net-troll. So telling such an individual to go troll under a bridge is the same as saying “stop it and get out” [loosely translated]. I could have just banned him, but I tend to give people the benefit of the doubt.

Facebook as a viable platform

I hope Facebook wakes up, because this type of “tactic” has grown and is being used more and more. And if you score a single point on the above criteria, regardless of whether the person who reported the incident is also the source — you are just banned for 7 days. Surely, the act of reporting someone who has not violated the terms should carry equal weight? But that is where Facebook just hides behind a wall of Q&A without any opportunity for actual dialogue. They don’t seem to care if the report was false or a pure act of revenge – they just blindly accept it and move on.

The result of this? Well, it’s sort of self-evident, isn’t it? People will have to deploy the same tactics in order to survive and protect themselves from such attacks; and voila – you have the extreme rise of fake profiles which has just exploded on Facebook.


Viable platform? I am really starting to question this

Well, I’m not going to create a false profile, because I have some terms of my own, commonly known as “principles”. I run several large groups on Facebook and have been nothing but an asset to their growth. And if they want to lose 7 days of high activity, that is their loss. I am also starting to question if Facebook is a viable platform at all when a guy running 3 large groups and two businesses there (with a 15-year membership history) can be so easily barred by a fake profile.

But sadly I will now stop talking to people who pick arguments, and just report and kick them from whatever group they are in. It’s sad, but those are the results of the absolutely absurd practices of Facebook. So until their filters employ some logic, that’s the way things are.

You cannot run a business on kindergarten rules

I sincerely hope you put some effort and thought into how to solve problems like these. For example, scanning the past 3 posts by the reporter to see if there are grounds to ignore the report – or in fact ban the reporter for creating the situation to begin with.

All of this can be solved with a simple strike and value system. If you falsely report someone, that’s a strike. If you camp in a group and get multiple reports (within a time-frame), you get automatically banned from that group. If you persistently report someone (a.k.a. bullying), that is another strike. Enough strikes and you get a 7-day warning (or harder, depending on the violation).

It wouldn’t require much work to create a system where long-standing, responsible members who benefit the platform are recognized over trolls that do nothing but ruin it. Seriously, I cannot believe that a planet-wide social platform with millions of users is deploying social rules from the late bronze age.

My thoughts go to the Monty Python sketch “She’s a witch!”, set in the darkness of medieval Europe. If someone says you are a witch, well then you must be one (sigh). Way to go, Facebook, just way to go.

Oh well, I meant to brush up on my Google+ work anyways 🙂


Alternative pointers in Smart Mobile Studio

February 27, 2018

Smart Mobile Studio already enjoys a rich and powerful set of memory handling classes and methods. If you have a quick look in the memory units (see below) you will find that Smart Mobile Studio really makes JavaScript sing and dance like no other.

As of writing (version 3.0 BETA), the following units are dedicated to raw memory manipulation:

  • System.Memory
  • System.Memory.Allocation
  • System.Memory.Buffer
  • System.Memory.Views

Besides these, the unit System.Types.Convert represents the missing link. It contains the class TDataType which converts data between intrinsic (language level) data types and byte arrays.

Alternative pointers

While Smart probably has one of the best frameworks (if not THE best) for memory handling out there, including the standard library that ships with Node.js, the way it works is slightly different from Delphi’s and FreePascal’s approach.

Since JavaScript is reference based rather than pointer based, a marshaling offset mechanism is more efficient in terms of performance; so we modeled this aspect of Smart on how C# in particular organizes its memory handling.

But is it possible to implement more Delphi-like pointers? To some degree, yes. The best approach would be to do this at compiler level, but even without such deep changes to the system you can actually implement a more Delphi-ish interface.

Here is an example of just such a system. It is small and simple, but compared to the memory units in the RTL it’s much slower. This is also why we abandoned this way of handling memory in the first place. But perhaps someone will find it interesting, or it can help you port code from Delphi to HTML5.

unit altpointers;

interface

// Note: the uses clause is omitted in this listing; see the PDF link at
// the end of the post for the complete unit.

type
  Pointer = variant;

  TPointerData = record
    Offset: integer;
    Buffer: JArrayBuffer;
    View:   JUint8Array;
  end;

function IncPointer(Src: Pointer; AddValue: integer): Pointer;
function DecPointer(Src: Pointer; DecValue: integer): Pointer;
function EquPointer(src, dst : Pointer): boolean;

// a := a + bytes
operator + (Pointer,   integer): Pointer uses IncPointer;

// a := a - bytes
operator - (Pointer,   integer): Pointer uses DecPointer;

// if a = b then
operator = (Pointer,   Pointer): boolean uses EquPointer;

function  Allocmem(const Size: integer): Pointer;
function  Addr(const Source: Pointer; const Offset: integer): Pointer;
procedure FreeMem(const Source: Pointer);
procedure MemSet(const Target: pointer; const Value: byte); overload;
procedure MemSet(const Target: pointer; const Values: array of byte); overload;
function  MemGet(const Source: pointer): byte; overload;
function  MemGet(const Source: pointer; ReadLength: integer): TByteArray; overload;


implementation

function MemGet(const Source: Pointer): byte;
begin
  if (Source) then
  begin
    var SrcData: TPointerData;
    asm @SrcData = @Source; end;
    result := SrcData.View.items[SrcData.Offset];
  end else
  raise Exception.Create('MemGet failed, invalid pointer error');
end;

function MemGet(const Source: Pointer; ReadLength: integer): TByteArray;
begin
  if (Source) then
  begin
    var SrcData: TPointerData;
    asm @SrcData = @Source; end;

    var Offset := SrcData.Offset;

    while ReadLength > 0 do
    begin
      if Offset >= SrcData.View.byteLength then
        raise Exception.Create('MemGet failed, offset exceeds memory');
      result.add( SrcData.View.items[Offset] );
      inc(Offset);
      dec(ReadLength);
    end;
  end else
  raise Exception.Create('MemGet failed, invalid pointer error');
end;

procedure MemSet(const Target: Pointer; const Value: byte);
begin
  var DstData: TPointerData;
  asm @DstData = @Target; end;
  DstData.View.items[DstData.Offset] := Value;
end;

procedure MemSet(const Target: Pointer; const Values: array of byte);
begin
  if Values.length > 0 then
  begin
    var DstData: TPointerData;
    asm @DstData = @Target; end;

    var Offset := DstData.Offset;
    for var x := low(Values) to high(Values) do
    begin
      if Offset >= DstData.View.byteLength then
        raise Exception.Create('MemSet failed, offset exceeds memory');
      DstData.View.items[Offset] := Values[x];
      inc(Offset);
    end;
  end;
end;

function EquPointer(src, dst: Pointer): boolean;
begin
  if (src) then
  begin
    if (dst) then
    begin
      var SrcData: TPointerData;
      var DstData: TPointerData;
      asm @SrcData = @src; end;
      asm @DstData = @dst; end;
      result := SrcData.Buffer = DstData.Buffer;
    end;
  end;
end;

function IncPointer(Src: Pointer; AddValue: integer): Pointer;
begin
  if (Src) then
  begin
    // Check that there is an actual change.
    // If not, just return the same pointer
    if AddValue > 0 then
    begin
      // Map source data
      var SrcData: TPointerData;
      asm @SrcData = @Src; end;

      // Calculate new offset, using the current view
      // position as the present location.
      var NewOffset := SrcData.Offset;
      inc(NewOffset, AddValue);

      // Make sure the new offset is within the range of the
      // memory buffer. Picky yes, but this is not native land
      if  (NewOffset >= 0)
      and (NewOffset < SrcData.Buffer.byteLength) then
      begin
        // Setup new pointer data at the adjusted offset
        var Data: TPointerData;
        Data.Buffer := SrcData.Buffer;
        Data.View := SrcData.View;
        Data.Offset := NewOffset;
        asm @result = @Data; end;
      end else
      raise Exception.Create('IncPointer failed, offset exceeds memory');
    end else
    result := Src;
  end else
  raise Exception.Create('IncPointer failed, invalid pointer error');
end;

function DecPointer(Src: Pointer; DecValue: integer): Pointer;
begin
  if (Src) then
  begin
    // Check that there is an actual change.
    // If not, just return the same pointer
    if DecValue > 0 then
    begin
      // Map source data
      var SrcData: TPointerData;
      asm @SrcData = @Src; end;

      // Calculate new offset, using the current view
      // position as the present location.
      var NewOffset := SrcData.Offset;
      dec(NewOffset, DecValue);

      // Make sure the new offset is within the range of the
      // memory buffer. Picky yes, but this is not native land
      if  (NewOffset >= 0)
      and (NewOffset < SrcData.Buffer.byteLength) then
      begin
        // Setup new pointer data at the adjusted offset
        var Data: TPointerData;
        Data.Buffer := SrcData.Buffer;
        Data.View := SrcData.View;
        Data.Offset := NewOffset;
        asm @result = @Data; end;
      end else
      raise Exception.Create('DecPointer failed, offset exceeds memory');
    end else
    result := Src;
  end else
  raise Exception.Create('DecPointer failed, invalid pointer error');
end;

function Allocmem(const Size: integer): Pointer;
begin
  if Size > 0 then
  begin
    var Data: TPointerData;
    Data.Offset := 0;
    Data.Buffer := JArrayBuffer.Create(Size);
    Data.View := JUint8Array.Create(Data.Buffer, 0, Size);
    asm @result = @Data; end;
  end else
  raise Exception.Create('Allocmem failed, invalid size error');
end;

function Addr(const Source: Pointer; const Offset: integer): Pointer;
begin
  if (Source) then
  begin
    if Offset > 0 then
    begin
      // Map source data
      var SrcData: TPointerData;
      asm @SrcData = @Source; end;

      // Check that offset is valid
      if (Offset >= 0) and (Offset < SrcData.Buffer.byteLength) then
      begin
        // Setup new Pointer data
        var Data: TPointerData;
        Data.Buffer := SrcData.Buffer;
        Data.View := SrcData.View;
        Data.Offset := Offset;
        asm @result = @Data; end;
      end else
      raise Exception.Create('Addr failed, offset exceeds memory');
    end else
    raise Exception.Create('Addr failed, invalid offset error');
  end else
  raise Exception.Create('Addr failed, invalid pointer error');
end;

procedure FreeMem(const Source: Pointer);
begin
  if (Source) then
  begin
    // Map source data
    var SrcData: TPointerData;
    asm @SrcData = @Source; end;

    // Flush reference and let the GC take care of it
    SrcData.Buffer := nil;
    SrcData.View := nil;
    SrcData.Offset := 0;
    asm @SrcData = {}; end;
  end else
  raise Exception.Create('FreeMem failed, invalid pointer error');
end;

end.


Using the pointers

As you can probably see from the code, there is no such thing as PByte, PWord or PLongword here. We use a clean uint8 typed array that we link to a memory buffer, so a “pointer” here is fully byte-based despite its untyped origins. In reality it just holds a TPointerData structure, but since this is done via asm sections, the compiler can’t see it and treats it as a variant.

The operators add support for code like:

var buffer := allocmem(1024);
memset(buffer, $FF);
buffer := buffer + 1;
memset(buffer, $FA);

But using the overloaded memset procedure is a bit more efficient:

var buffer := allocmem(1024);
var bytes := TDataType.StringToBytes('this is awesome!');
memset(buffer, bytes);
buffer := buffer + bytes.length;
// write more data here

While fun to play with, and perhaps useful when porting over older code, I highly recommend that you familiarize yourself with classes like TBinaryData, which represents a fully managed buffer with a rich number of methods to use.

And of course, let us not forget TMemoryStream combined with TStreamWriter and TStreamReader. These will no doubt feel more at home both under HTML5 and Node.js.
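As a rough illustration of why they feel familiar – note that the writer and reader method names below are my assumption of a Delphi-style API, so check the RTL units for the exact signatures:

var Stream := TMemoryStream.Create;
try
  var Writer := TStreamWriter.Create(Stream);
  Writer.WriteString('this is awesome!');    // assumed method name
  Stream.Position := 0;
  var Reader := TStreamReader.Create(Stream);
  WriteLn(Reader.ReadString(Stream.Size));   // assumed method name
finally
  Stream.Free;
end;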

Note: WordPress formatting of pascal code is not the best. Click here to view the code as PDF.

Extract DLL member names in Delphi

February 16, 2018

Long before .NET and Java, I was building a huge system for a large Norwegian company. They wanted a custom scripting engine and they wanted a way to embed bytecodes in DLL files. Easy as apple pie (I sure know how to pick ’em, huh?).

The solution turned out to be simple enough, but this post is not about that; it’s about a unit I wrote as part of the solution. In order to recognize one DLL from another, you obviously need the ability to examine a DLL file. I mean, you could just load the DLL and try to map the functions you need, but that will throw an exception if it’s the wrong DLL.

So after a bit of googling around and spending a few hours on MSDN, I sat down and wrote a unit for this. It allows you to load a DLL and extract all the method names the library exposes. If nothing else it makes it easier to recognize your DLL files.

Well enjoy!

unit dllexamine;

interface

uses
  Windows, Classes, SysUtils, ImageHlp;

  // Reference material for WinAPI functions: MapAndLoad, UnMapAndLoad,
  // ImageDirectoryEntryToData and ImageRvaToVa are all documented in the
  // Image Help (ImageHlp) section on MSDN.

type
  THexDllExamine = class abstract
  public
    class function Examine(const Filename: AnsiString;
      out Members: TStringlist): boolean; static;
  end;

implementation

class function THexDllExamine.Examine(const Filename: AnsiString;
  out Members: TStringlist): boolean;
type
  TDWordArray = array [0..$FFFFF] of DWORD;
var
  libinfo:      LoadedImage;
  libDirectory: PImageExportDirectory;
  SizeOfList:   Cardinal;
  pDummy:       PImageSectionHeader;
  i:            Cardinal;
  NameRVAs:     ^TDWordArray;
  Name:         string;
begin
  result := false;
  members := nil;

  if MapAndLoad( PAnsiChar(FileName), nil, @libinfo, true, true) then
  begin
    try
      // Get the export directory
      libDirectory := ImageDirectoryEntryToData(libinfo.MappedAddress,
        false, IMAGE_DIRECTORY_ENTRY_EXPORT, SizeOfList);

      // Anything to work with?
      if libDirectory <> nil then
      begin
        // Get ptr to the first node: the table of exported-name RVAs
        NameRVAs := ImageRvaToVa( libinfo.FileHeader,
          libinfo.MappedAddress, libDirectory^.AddressOfNames, pDummy);

        // Traverse until end
        Members := TStringList.Create;
        try
          for i := 0 to libDirectory^.NumberOfNames - 1 do
          begin
            Name := PChar(ImageRvaToVa(libinfo.FileHeader,
              libinfo.MappedAddress, NameRVAs^[i], pDummy));
            Name := Name.Trim();
            if Name.Length > 0 then
              Members.Add(Name);
          end;
        except
          on e: exception do
            FreeAndNil(Members);
        end;

        // We never get here if an exception kicks in
        result := members <> nil;
      end;
    finally
      // Yoga complete, now breathe ..
      UnMapAndLoad(@libinfo);
    end;
  end;
end;

end.
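Typical usage could look like this (a hypothetical call site; ShowMessage assumes a VCL application):

var Members: TStringList;
begin
  if THexDllExamine.Examine('user32.dll', Members) then
  try
    ShowMessage(Members.Text);   // list every exported symbol name
  finally
    Members.Free;
  end;
end;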

Smart Pascal assembler, it’s a reality

January 31, 2018

After all these years of activity, I guess it is no secret that I am a bit over-active at times. I am usually happiest when I work on 2-3 things at the same time. I also do plenty of research to test theories and explore various technologies. So it’s never a dull moment – and this project has been no exception.

Bytecode based compilation

For the past 7 years I have worked closely with compiler tech of various types and complexity on a daily basis. Script engines like DWScript, PAXScript, PascalScript, C# Script, JavaScript (the list continues) – all of these have been used in projects either in-house or for customers, and each serves a particular purpose.

Now while they are all fantastic engines that deliver fantastic results – I have had this “itch” to create something new. Something that approaches the problem of interpreting, compiling and running code from a more low-level angle. One that is more standardized and not just a result of the inventor’s whim or particular style, which in my view results in a system that won’t need years of updates and maintenance. I am a strong believer in simplicity, meaning that most of the time a simple ad-hoc solution is the best.

It was this belief that gave birth to Smart Mobile Studio to begin with. Instead of spending a year writing a classical parser, tokenizer, AST and code emitter – we forked DWScript and used it to perform the tokenizing for us. We were also lucky to catch the interest of Eric (the maintainer), and the rest is history. Smart Mobile Studio was born, made with off-the-shelf parts; not boring, grey studies by men in lab coats.

The bytecode project started around the summer of 2017. I had thought about it for a while, but this is when I finally took the time to sit down and pen my ideas for a portable virtual machine and bytecode-based instruction set. A system that could be easily implemented in any language, from Basic to C/C++, without demanding the almost ridiculous system specs and know-how of Java or the Microsoft CLR.

I labeled the system LDEF, short for “language definition format”. I have written a couple of articles on the subject here on my blog, but I did not yet have enough finished to demo my ideas.

Time is always a commodity, and like everyone else the majority of my time is invested in my day job, working on Smart Mobile Studio. The rest is divided between my family, social obligations, working out and hobbies. Hence progress has been slow and sporadic.

But I finally have a working prototype, so the LDEF parser, assembler, disassembler and runtime are no longer a theory but a functional virtual machine.

Power in simplicity

Without much fanfare I have finally reached the stage where I can demonstrate my ideas. It took a long time to get to this point, because before you can even think of designing a language or carve out a bytecode-format, you have to solve quite a few fundamental concepts. These must be in place before you even entertain the idea of starting on the virtual machine – or the project will simply end up as useless spaghetti that nobody understands or wants to work with.

  • Text parsing techniques must be researched properly
  • Virtual machine design must be worked out
  • A well designed instruction-set must be architected
  • Platform criteria must be met

Text parsing sounds easy. It’s one of those topics where people reply “oh yeah, that’s easy” on auto-pilot. But when you really dig into this subject you realize it’s anything but easy. At least if you want a parser that is fast, trustworthy – and more importantly: one that can be ported to other dialects and languages with relative ease (Delphi, FreePascal, C#, C/C++ are obvious targets). The ideas have to mature, quite frankly.

One of my most central criteria when writing this system has been: no pointers in the core system. How people choose to implement their version of LDEF for other languages is up to them (Delphi and FPC included), but the original prototype should be as clean and down to earth as possible.

Besides, languages like C# are not too keen on pointers anyway. You can use them, but you have to mark your assemblies as “unsafe”. And why bother when var and const parameters offer you a safe and portable alternative? Smart Mobile Studio (or Smart Pascal, the dialect we use) doesn’t use pointers either; we compile to JavaScript, after all, where references are the name of the game. So avoiding pointers is more than central; it’s fundamental.

We want the system to be easy to port to any language, even Basic for that matter. And once the VM is ported, LDEF compiled libraries and assemblies can be loaded and used straight away.

The virtual CPU and its aggregates

The virtual machine architecture is the hard part. That’s where the true challenge resides. All the other stuff, be it source parsing, expression management, building a model (AST), data types, generating jump tables, emitting bytecodes – all those tasks are trivial compared to the CPU and its aggregates.

The design and architecture of the CPU (or “runtime” or “virtual machine”, since it consists of many parts) affects everything. It especially shapes the CPU instructions (what they do and how). But as mentioned, the CPU is just one of many parts that make up the virtual machine. What about variable handling? How should variables be allocated, addressed and dealt with? The way the VM deals with this will directly reflect how the bytecode operates and how much code you need to initialize, populate and dispose of a variable.

Then you have more interesting questions like: how should the VM distinguish between global and local variable identities? We want the assembly code to be uniform like real machine code; we don’t want “special” instructions for global variables and a whole different set of instructions for local variables. LDEF allows you to pass registers, variables, constants and a special register (DC) for data control as you wish. You are not bound to using registers only for math, for instance.

I opted for an old trick from the Commodore days, namely “bit shift marking”. Local variables have the first bit in their ID set, while global variables have the first bit zeroed. This allows us to distinguish between global and local variables extremely fast.

Here is a simple example that demonstrates the technique. The Id parameter is the variable id read directly from the bytecode:

function TExample.GetVarId(const Id: integer;
  var IsGlobal: boolean): integer; inline;
begin
  IsGlobal := ((byte((Id shl 24) shr 24) shr 1) and 1) = 0;
  result := Id shr 1;
end;

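For completeness, the opposite direction could look like the helper below. This is my own hypothetical sketch (not from the LDEF sources), packing an index into an ID using bit 0 as the locality flag:

function TExample.MakeVarId(const Index: integer;
  const IsLocal: boolean): integer; inline;
begin
  result := Index shl 1;
  if IsLocal then
    result := result or 1; // set the first bit for local variables
end;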
This is just one of a hundred details you need to mentally work out before you even attempt the big one: namely how to deal with OOP and inheritance.

So far we have only talked about low-level bytecodes (ILASM as it’s called under the .NET regime). In both Java and .NET, object orientation is intrinsic to the VM. The runtime engine “knows” about objects; it knows about classes and methods and expects the bytecode files to be neatly organized class structures.

LDEF “might” go that way, but honestly I find it more tempting to implement OOP in ASM itself. So instead of the runtime having intrinsic knowledge of OOP, a high-level compiler will have to emit a scheme for OOP instead. I still need to think about and research what is best regarding this topic.

Pictures or it didn’t happen

The prototype is now 97% complete, and it will be uploaded so that people can play around with it. The whole system is implemented in Smart Pascal first (a Delphi and FreePascal version will follow), which means the whole system runs in your browser.

Like you would expect from any ordinary x86 assembler (MASM, NASM, GNU AS, IAR [ARM], among others), the system consists of 4 parts:

  • Parser
  • Assembler
  • Disassembler
  • Runtime

So you can write source code directly in the browser, compile / assemble it – and then execute it on the spot. Then you can disassemble it and look at the results in-depth.


The virtual cpu

The virtual CPU sports a fairly common set of instructions. Unlike Java and .NET, the CPU has 16 data-aware registers (meaning the registers adopt the type of the value you assign to them, a bit like “variant” in Delphi and C++Builder). Variables allocated using the alloc() instruction can be used just like a register; all the instructions support both registers and variables as params – as well as defined constants, inline constants and strings.

  • R[0] .. R[15] ~ Data-aware work registers
  • V[x] ~ Allocated variable
  • DC ~ Data control register

The following instructions are presently supported:

  • alloc [id, datatype]
    Allocate temporary variable
  • vfree [id]
    Release previously allocated variable
  • load [target, source]
    Move data from source to target
  • push [source]
    Push data from a register or variable onto the stack
  • pop [target]
    Pop a value from the stack into a register or variable
  • add [target, source]
    Add value of source to target
  • sub [target, source]
    Subtract source from target
  • mul [target, factor]
    Multiply target by factor
  • div [target, factor]
    Divide target by factor
  • mod [target, factor]
    Modulate target by factor
  • lsl [target, factor]
    Logical shift left, shift bits to the left by factor
  • lsr [target, factor]
    Logical shift right, shift bits to the right by factor
  • btst [target, bit]
    Test bit in target
  • bset [target, bit]
    Set bit in target
  • bclr [target, bit]
    Clear bit in target
  • and [target, source]
    And target with source
  • or [target, source]
    OR target with source
  • not [target]
    NOT value in target
  • xor [target, source]
    XOR target with source
  • cmp  [target, source]
    Compare value in target with source
  • noop
    No operation, used mostly for byte alignment
  • jsr [label]
    Jump sub-routine
  • bne [label]
    Branch not equal, conditional jump based on a compare
  • beq [label]
    Branch equal, conditional jump based on a compare
  • rts
    Return from a JSR call
  • sys [id]
    Call a standard library function

The virtual CPU can support instructions with any number of parameters, but the most common is either one or two.
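To give a feel for how the instructions combine, here is a small mock-up routine pieced together from the list above. The syntax is purely illustrative – the real LDEF assembly format will be documented when the prototype is released:

; sum the numbers 1..10 into R[0] (illustrative syntax only)
sum:
    load R[0], 0        ; running total
    load R[1], 1        ; counter
loop:
    add  R[0], R[1]     ; total := total + counter
    add  R[1], 1        ; counter := counter + 1
    cmp  R[1], 11
    bne  loop           ; branch back until the counter reaches 11
    rts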

I will document more as the prototype becomes available.

TextCraft 1.2 for Smart Pascal

January 26, 2018

TextCraft is a fast, generic object-pascal text parsing framework. It provides you with the classes you need to write fast text parsers that build effective data models.

The TextCraft framework was recently moved up to version 1.2 and has been ported from Delphi to both FreePascal and Smart Pascal (the dialect used by Smart Mobile Studio). This is probably the only parsing framework that spans 3 compilers.

Smart Pascal coders can download the framework unit from the BitBucket TextCraft Repository and place it in their $Install/Library folder (where $Install is the folder holding Smart’s library and RTL folders).

Buffer, parser, model

TextCraft divides the job of parsing into 4 separate objects, each of them representing a concept familiar to people writing compilers. These are: buffer, parser, model and context. If you are parsing a programming language, the “model” would be what people call the AST (short for “Abstract Syntax Tree”). This AST is later fed to the code generator, turning it into an executable program (Smart Pascal compiles to JavaScript, so there really is no limit to the transformation, just levels of complexity).

Note: TextCraft is not a compiler for any particular language; it is a generic, language-agnostic text parsing framework, meaning that it makes it easy for you to build parsers with it. We recently used it to parse command-line parameters for FreePascal, so it doesn’t have to be about languages.

The buffer

The buffer has one of the most demanding jobs in the framework. In other frameworks the buffer is often just a memory allocation with a simple read method, but in TextCraft the buffer is responsible for a lot more. It has to expose functions that make text recognition simple and effective; it has to keep track of column and row position as you move through the buffer content – and much, much more. So in TextCraft the buffer is where the text methodology is implemented in full.

The parser

As mentioned, the parser is responsible for using the buffer’s methods to recognize and make sense of a text. As it makes its way through the buffer content, it creates model objects that represent each element. Typical for a language would be structures (records), classes, enums, properties and so on. Each of these will be registered in the AST data model.

The Model

The model is a construct. It is made up of as many model-object instances as you need to express the text in symbolic form. It doesn’t matter if you are parsing a text document or source code, you would still have to define a model for it.

The model obviously reflects your needs. If you just need a superficial overview of the data, then you create a simple model. If you need more elaborate information, then you create that.

Note: When parsing a text document, a traditional organization would be to divide the model into: chapter, section, paragraph, line and individual words.

The Context

The context object is what links the parser to our model and buffer objects. By default the parser doesn’t know anything about the buffer or model. This helps us abstract away things that would otherwise turn our code into a haystack of references.

The way the context is used can be described like this:

When parsing complex data you often divide the job into multiple classes. Each class deals with one particular topic. For example: if parsing Delphi source code, you would write a class that parses records, a parser that handles classes, another that handles field declarations (and so on).

As a parser recognizes one of the mentioned constructs, say a record, it will create a record model object to hold the information. It will then add that to the context by pushing it onto its reference stack.

The first thing a child parser does is to grab the model object from the top of the reference stack. This way the child parsers will always know where to store their model information. It doesn’t matter how deep or recursive something gets; the stack approach, and passing the context object to the child parsers, will always make sure each parser “knows” where to store information.
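In code, the pattern could look something like the sketch below. The class and method names are hypothetical – TextCraft’s actual API differs, but the stack discipline is the point:

procedure TRecordParser.Parse(const Context: TParserContext);
begin
  var Model := TRecordModel.Create;

  // attach our model to whatever parent model sits on top of the stack
  TContainerModel(Context.Stack.Peek).Add(Model);

  // make our model the current target while the children parse
  Context.Stack.Push(Model);
  try
    FFieldParser.Parse(Context); // child parser stores fields in our model
  finally
    Context.Stack.Pop;
  end;
end;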

Why is this important?

This is important because it’s cost-effective in computing terms. The TextCraft framework allows you to create parsers that can chew through complex data without turning your project into spaghetti.

So no matter if you are parsing phone numbers, zip codes or complex C++ source code, TextCraft will help you get the job done, in a way that is easy to understand and maintain.

Smart Mobile Studio: more cmd tools

January 24, 2018

Being able to compile and work with projects from the command line has been possible with Smart Mobile Studio almost since the beginning. But as projects grow, so does the need for more automation.


The IDE contains a few interesting features, like the “Data to picture” function. This takes a datafile (or any file) and places the raw bytes into a PNG picture as pixels. This is a great way of loading data that the browser would otherwise block or ignore.

People have asked if we could perhaps turn these into command-line tools as well, and I have finally gotten around to doing just that. So our toolbox now contains two more command-line tools (in addition to the smsc compiler):

  • Superglue
  • DataToImage


Superglue

When you work with large JavaScript libraries, they often consist of multiple files. This is great for JS developers and no different from how we use multiple unit files to organize a project.

But it can be problematic when you deploy applications, because if the dependencies are heavy then your application will load slower. A typical example is ACE, the code editor we recently added to Smart. It’s a fantastic editor, but it consists of a monstrous number of files.

Superglue can import files based on a filter (like *.js) or a semi-colon delimited list. It will then merge these files together into a single file.

For example, let’s say you have 35 JavaScript files that make up a library, and let’s say you have downloaded and unpacked this to “C:\Temp” on your hard disk. To link all the JS files into a single file, you would type:

superglue -mode:filter -root:"C:\temp" -filter:"*.js" -sort -out:"C:\Smart\Libraries\MyLibrary\MyLibrary.js"

The above will enumerate all the files in “C:\Temp” and keep only those with a .js file extension. It will sort the files since the -sort switch is set, and finally link all the files into a new, single file called MyLibrary.js (in another location).

So instead of shipping 35 files, which means 35 HTTP requests, we ship one file and load the data in ourselves when the application starts.


DataToImage

As the name implies, this is the same function that you find in the IDE. It takes a raw data file (actually, any file) and injects the bytes as pixels into a new PNG file. Code for extracting the data again already exists in the RTL – but I will brush up on this again when we add these tools to our toolbox.

Using this is simplicity itself:

datatoimage -input:"mysqldb.sq3" -output:"c:\smart\projects\mymobileapp\res\defaultdata.png"

The above takes a default SQLite database and stores it inside a picture. In the application we load the picture in, extract the data, and then use that as our default data — which is later stored in the browser cache. This saves us having to execute a ton of SQL statements to establish a DB from scratch in memory.

Better parsing

These tools are very simple. They don’t take long to make, but they do need to be reliable. And they do need to be in place when you need them.

We actually ported over TextCraft, a parser we use both in Smart Mobile Studio and Delphi, so it would compile under Freepascal. There was a huge bug in the way Lazarus deals with parameters, so we ended up writing a fresh new command-line parser.

Future tools

We have a lot on our plate, so I doubt we will focus much more on our toolbox after these. They simplify library making and data injection for projects, and you can use shell scripts to implement the “make files” most people rely on these days.
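For example, a build script along these lines will do the job (the smsc invocation is illustrative – check the compiler’s own help output for its real switches):

rem build.bat - bundle the library, embed default data, then compile
superglue -mode:filter -root:".\js" -filter:"*.js" -sort -out:".\lib\bundle.js"
datatoimage -input:".\data\default.sq3" -output:".\res\defaultdata.png"
smsc myproject.sproj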

However, one tool that would be very handy is a “project to xmlhelp” or similar. A command-line program that will scan your Smart project and emit a full overview of your classes, methods and properties in traditional xml-help format.

But we will see when time allows — at least making libraries and merging in data will be easier from now on 🙂

Fixed Header in Smart Applications

January 3, 2018

Smart Mobile Studio gives you a lot of really cool visual controls to play with. One of them is a header control (also called a navigation panel by some) that traditionally shows and hides its buttons (back and next) in response to form navigation.

One question that many people have asked is: how can I make a header that remains fixed and doesn’t scroll with the forms? So no matter what form I navigate to, the header remains in place, preferably easily accessed.

The Visual Application

Smart visual applications are more than just forms and buttons. The first thing that is created when you run a visual Smart application is naturally an instance of TApplication; this in turn creates a display control, and inside that again there is something called a “viewport”. Forms are always created inside the viewport.

If you are wondering why on earth we use two nested containers like this, it has to do with scrolling and keeping our controls isolated in one place. Forms are positioned horizontally inside the viewport. So whenever you are moving from Form1 to Form2, depending on the scroll effect you have picked, the second form is lined up either before or after the current form. We then execute a CSS3 animation that smoothly scrolls the new form into view, or the previous form out of view – depending on how you look at it.

The display

The root display control, TW3Display, has only one job, and that is to house the view control. It also contains code to lay out child controls vertically. Since there is typically only one control present, you don’t notice much of what TW3Display does.

The “trick” to a static header that remains unaffected by forms is simply to create the header control with “Application.Display” as the parent. That is all you have to do. You could also create it on Application.Display.View, but then it would cause problems with scrolling. My point for mentioning that is to underline how the RTL has no special rules for its structure. All visual entities that make up your Smart Pascal application follow the same laws and are subject to the same rules as TW3Button or TW3Label might be.

Creating controls that don’t attach to a form

The vertical layout that TW3Display performs automatically is very simple: it sorts the child elements based on their Y position and places them directly after each other. This means that all you have to do is create the header and make sure you give it a negative Y position, and it will always remain fixed on top of the viewport and its forms.

TW3Application has a virtual method called ApplicationStarting() that is perfect for what we want to achieve. As the name says, this method fires when the application is starting, so it is perfect for creating controls that don’t attach to a form. It also has an accompanying ApplicationClosing() method where we can release the control.

So let’s start by creating our control. Each visual application has a “Unit1” that is created automatically. This contains your application object. While TApplication is a bit anonymous under Delphi or Lazarus, under Smart it serves a more central role. It’s the place where you expose global values that should be usable throughout the entire program.

unit Unit1;

interface

uses
  Pseudo.CreateForms, // auto-generated unit that creates forms during startup
  System.Types, SmartCL.System, SmartCL.Components, SmartCL.Forms;
  // Note: the original uses clause continued here; it must also pull in
  // the unit that declares TW3HeaderControl.

type
  TApplication = class(TW3CustomApplication)
  private
    FHeader: TW3HeaderControl;
  protected
    procedure ApplicationStarting; override;
    procedure ApplicationClosing; override;
  public
    property Header: TW3HeaderControl read FHeader;
  end;

implementation

procedure TApplication.ApplicationStarting;
begin
  inherited;
  FHeader := TW3HeaderControl.Create(Display);
  FHeader.SetBounds(0, -10, 100, 46);
end;

procedure TApplication.ApplicationClosing;
begin
  FHeader.free;
  inherited;
end;

end.

Let’s compile and see what we got so far!


As expected we now have a header outside the form region

Global access

SmartCL, which is the namespace (a collection of units organized under one name) where all visual, DOM-based classes live, has a global function for getting the application object. This is simply Application(), and you have probably used it many times.

What is not so well known is that Application() returns a stock TCustomApplication instance. In other words, if you inspect the instance you will find none of the properties you have defined in TApplication. This is because TApplication is unknown until the application is executed. So in order to access your actual application object, you need to typecast, like I do here:

procedure TForm1.InitializeObject;
begin
  inherited;
  {$I 'Form1:impl'}
  var app := TApplication(Application);
  app.Header.Title.Caption := 'This is my header';
end;

Let’s have a look at the result (note: I added a label as well, just so you don’t think you missed something):


Now, this approach works fine for many types of objects. I tend to isolate my database instance there, the static header, global storage — all of it can be neatly exposed via TApplication. Fast, simple and efficient.

The final step

The initial state for the static header should be that both buttons are hidden by default. So when you start the application it just shows a title, nothing more.

When you click something that causes navigation to Form2 (or some other second form), the back button should become visible once Form2 has scrolled into view.

When the user clicks the back button, the opposite should happen. The back button should be disabled while you navigate back to Form1, then completely hidden once you have arrived.

I don’t think I need to demonstrate this. Obviously, if you have forms that lead to more forms – then you probably want to add a “navigation stack” to the application object, an array that holds the previously visited forms.

Then whenever someone hits the “back button” you just pop the previous form off the stack, and navigate to it.
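A rough sketch of such a stack on the application object could look like this – the navigation call and back-button property are my own naming, so adapt them to the RTL’s actual form-navigation API:

// FNavStack: array of string; - declared as a field of TApplication
procedure TApplication.NavigateTo(const FormName: string);
begin
  FNavStack.Add(CurrentFormName); // assumed helper returning the active form's name
  GotoForm(FormName);             // assumed navigation call
  Header.BackButton.Visible := true;
end;

procedure TApplication.NavigateBack;
begin
  if FNavStack.Length > 0 then
  begin
    var Previous := FNavStack[FNavStack.Length - 1];
    FNavStack.SetLength(FNavStack.Length - 1); // pop the stack
    GotoForm(Previous);
  end;
  Header.BackButton.Visible := FNavStack.Length > 0;
end;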

Well, hope it helps!



PNG icons on Amiga OS 3.X

December 6, 2017

A couple of days back I posted a couple of pictures of my Raspberry Pi 3B based Amiga setup. This caused quite a stir in several groups, and people were unsure what exactly I was posting. Is this Amiga OS 4? Is it AROS? Scalos? Or perhaps just a pimped-up classic Amiga 3.x?


The more the questions arose, the more I realized that a lot of people don’t really know what the Pi can do. I don’t blame them; between work, kids and mending a broken back, it probably took me a year before I even entertained the idea of setting up a proper UAE environment. And as luck would have it, two good friends of mine, Gunnar Kristjánsson and Thomas Navarro Garcia, had already done the worst part: namely producing a Linux distro that auto-boots into Workbench (or technically, into a full-screen UAE environment).

Taking advantage of speed

Purists might not be happy about it, but the Pi delivers some serious processing power when it comes to Amiga emulation. The version of UAE Thomas and Gunnar opted for is UAE4Arm, a special version that contains a hand-optimized JIT engine. This takes 68k code and generates ARM machine code “on the fly”, and is thus able to run Amiga software much faster than traditional UAE variations like FS-UAE.

But what should we do with all that extra speed? I mean, there is a limited number of tasks that benefit from the extra processing power of the Pi (or an accelerator, for that matter). Well, being a programmer, compilation is one process where I really love the extra grunt. When using modern compilers like FreePascal 3.x on a classic 68k Amiga, there is no denying we need all the CPU power we can get. So compiling on the Pi is a great boost over ordinary, real Amiga machines.


FreePascal is great, although the old “Turbo” IDE is due for an overhaul

The second aspect is the infrastructure. And this is where we get to the pimping part. By default Workbench is optimized for low-color representation, meaning that icons and backdrops will be 4-8 colors, fixed palette and fairly useless by modern standards. Since UAE4Arm has built-in support for RTG (re-targetable graphics), which means 15, 16, 24 and 32 bit screen modes (the same as any modern PC), surely we can remedy the visuals, right?

Well, I had a google around and found that there is an icon library that supports the latest PNG based icons. These are icons that contain 32-bit graphics with support for alpha blending (transparency). This is the exact same icon system that is used in Amiga OS 4.

So what I did was download the version 46.x icon library from Aminet. Since the Pi emulates (in my config) an MC68040 CPU, I was able to use the 040-optimized binary. In essence I just copied that into my “Libs” folder (after removing the old one first, just to be sure).
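If you want to do the same, the swap boils down to a couple of AmigaShell commands (paths and file names here are illustrative – use whatever the archive you downloaded contains), followed by a reboot so the new library gets loaded:

; replace the system icon.library with the 040-optimized, PNG-capable one
delete LIBS:icon.library
copy Work:Downloads/icon.library_68040 LIBS:icon.library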

And voila, my Workbench was now able to show 32-bit PNG icons just like OS 4!

Getting some bling

With OS 4 style icons supported, where do I get some icons to play with? Well, again I went on Aminet and downloaded a ton of large icon packs. I also visited OS4Depot and downloaded some cool background pictures and even more icons.

Then came the time-consuming process of manually replacing the *.info files. Every file that you can see via Workbench has an associated .info file with the same name. So if you have a program called “myprogram”, then the icon file will be “myprogram.info”.

And that’s basically it! I spent a Saturday replacing icons and doing some mild tweaking in VisualPrefs (again from Aminet), and suddenly my old, grey Workbench was alive with radiant colors.


I love it! It might not be perfect, but I have seen Linux distros that look worse!

What I find amazing is that even after 30 years, the old Amiga OS 3.x can still surprise us! If nothing else it’s a testament to the flexible architecture the guys at Commodore knocked out; an architecture that thrives in extremely low memory situations – yet delivers in spades if you give it more to work with.

Doing some modern chores

One of the first things I installed on my Pi was a copy of FreePascal. This has been updated to version 3.1, which is just one revision behind the compiler used on Windows and OS X. This is a bit too nifty for standard Amiga machines; you need at least an A1200 with 64 megabytes of RAM to work with it, although the size of the binaries is reasonably small if you stay clear of the somewhat bloated LCL framework.

So I was able to use my Object Pascal skills to create an unzip/zip command-line program in 15 minutes. Doing this on my Amibian box felt great, and I really enjoy the fresh new look of Workbench. In a perfect world OS4 would be 68k and the CPUs would all be FPGAs running close to Intel i7 speeds, but alas – a humble Pi will have to do for now.


If you want to re-create my experiment, then start by downloading Amibian. This is a clean Linux distro and doesn’t contain Workbench. So after you have made an SD card with Amibian, you need to copy over Workbench. I suggest you copy over the raw files and mount a Linux folder as a drive. Using hard-disk images is possible, but I don’t trust them; should an error occur, you lose everything. So yeah, stick with folder-mounted drives if you want less frustration.

You can visit Amibian here: https://gunkrist79.wixsite.com/amibian

HTML5 Attributes, learn how to trigger conditional styling with Smart Mobile Studio

November 8, 2017

I’m not sure if I have written about attributes before; probably, because they are so awesome to work with. But today I’m going to show you something that makes them even more awesome, bordering on unbelievable.

What are HTML attributes again?

Before we dig into the juicy stuff, let’s talk about attributes. For those that don’t know much about HTML or CSS, here is a quick and dirty overview. A lot of people use Smart Mobile Studio because they don’t know CSS or HTML beyond the basics (or even because they don’t want to learn it; quite a few can’t stand JavaScript and CSS). Well, that is not a problem.

Note-1: While not a vital prerequisite, I do suggest you buy a good book on JavaScript, HTML and CSS. If you are serious about using web technology (like Node.js on the server), your Smart skills will benefit greatly from knowing how things work “under the hood”, so to speak. You will make better Smart Mobile Studio applications and you will understand the RTL at a deeper level than the average user.

OK, back to attributes. You know how HTML tags have parameters right? For example, a link to another webpage looks like this:

<a href="http://blablabla">This is a link</a>

Note-2: I don’t have time to teach you HTML from scratch, so if you have no idea what the “A” tag is, then please google it.

The focus here is not on the “a” part, but rather on the “href” parameter. That is actually not a parameter but a tag attribute (which must not be confused with a tag property, btw).

Back in the day, attributes used to be exclusive, meaning that if you tried to set some attribute value the tag didn’t support – nothing would happen. The browser would just ignore it and the information would be deleted.

Around HTML5 all of that changed. Suddenly we got the freedom to declare our own attributes, regardless of tag. The only catch is that the attribute name must be prefixed with “data-”. Which makes sense, because the browser needs to tell the difference between valid attributes, junk and intrinsic (supported) attributes.
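So any tag can carry home-made attributes, as long as they follow that naming scheme. For example:

<div id="mybox" data-funky="this rocks" data-owner="unit1">...</div>

Here both data-funky and data-owner are perfectly valid, even though no browser has intrinsic knowledge of them.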

Storing information outside the pascal instance

When you create a visual control, the control internally creates a DOM element (or tag object, same thing) that it manages. Most visual controls in our RTL manage a DIV element, because that is just a square block that can be easily molded and shaped into whatever you like.

But when you create a Smart Pascal class, you don’t just get a DOM element in return. You get a Smart Pascal object instance. This is the same as Delphi and Lazarus: a class is a blueprint of an object. You don’t create classes, you create instances.

The same thing happens when you use Smart Pascal: the JSVM (JavaScript virtual machine) delivers a JavaScript object instance – and that is what your code operates on. When you create a visual class instance, that in turn will create a DOM element and manage it until you release the Smart Pascal instance.

Storing information in a class is easy. It’s one of the fundamental aspects of object-oriented programming and there really isn’t that much to say about it. But what if you need to store information in a control you don’t own? Perhaps you have installed a package you bought, or one a friend shared with you – and you can’t change the class (or perhaps don’t want to change the class). What then?

This is where the attribute object comes to the rescue. Because now you can store information directly in the DOM element rather than altering the class itself (!)

That is so powerful I don’t even know where to start, because you can write libraries that do amazing things without extra fields and without demanding that the user change their controls (and in some cases, without forcing the user to inherit from a particular custom control).

A real-life example

Our special effects unit, SmartCL.Effects.pas, uses this technique to keep track of effect state. When you execute an effect on a control, a busy-flag is written as an attribute to the managed DOM object. And when the effect is finished, the busy-flag is reset.


Our CSS hardware powered effect unit uses attributes to keep track of running effects

If you execute 10 effects on a control, it’s this busy flag that stops all of them from running at the same time (which would cause havoc). While this attribute is set, any queued effects wait their turn.

This would be impossible to achieve without declaring a busy property, or doing some form of stacking behind the scenes; both of them expensive code-wise. But with attributes it’s a piece of cake.

And now for the juicy parts

Now that you know what attributes do and how awesome they are, what can possibly make them even more awesome? In short: “CSS attribute pseudo-selectors” (phew, that is a mouthful, isn’t it!).

So what the heck is a pseudo-selector? Again, it’s a long story, so I’m just going to call it “states”. It allows you to define styles that should be activated when a particular state occurs. The most typical state is the :active state. When you press a button, the DOM element is said to be active. This allows us to write CSS styles that are applied when you press the button (like changing the background, border or font color).

But did you know you can also define styles that react to attribute changes?

Just stop and think about this for a moment:

  • You can define your own attributes
  • You can read, write and check for attributes
  • Attributes are part of the DOM element, not the JS instance
  • You can define CSS that apply when an element has an attribute
  • You can define CSS that apply if an attribute has a particular value

If you are still wondering what the heck this is good for, imagine the following:

  1. Write an event-handler for TW3Application.OnOrientationChange (an event that fires when the user rotate the mobile device horizontally or vertically).
  2. Store the orientation as an attribute value
  3. Define CSS especially for the orientation attribute values

The browser will automatically notice the attribute change and apply the corresponding CSS. This is probably one of the coolest CSS features ever.
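As a sketch, the code side of that can be as small as the snippet below; the handler signature and the width/height test are my assumptions, so adjust to your RTL version:

Application.OnOrientationChange := procedure (Sender: TObject)
begin
  // store the new state where the stylesheet can react to it
  if Display.Width > Display.Height then
    Display.Attributes.Write('orientation', 'landscape')
  else
    Display.Attributes.Write('orientation', 'portrait');
end;

The matching stylesheet rule would then use a selector along the lines of div[data-orientation="landscape"].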

Other things that come to mind:

  • You can write CSS that colors the rows in a grid or listbox based on the data-type the row contains. So an integer can have a different background from a float, boolean or string. And all of it can be automated with no code required on your part. You just need to write the CSS rule once and that’s it.
  • You can use attributes to trigger pre-defined animations. In fact, you could pre-define 100 different animations, and based on the attribute name you can trigger the correct one. Again, all of it can be neatly implemented as CSS.

Let’s make a button that triggers a style

While simple, the following should serve as a good example. It’s easy to build on and not too complex. Let’s start with the CSS:

button[data-funky="this rocks"] {
  background: none;
  background-color: #FF00FF;
  color: #FFFFFF;
  font-size: 22px;
  font-weight: bold;
}
The CSS above should be easy to understand. First we define the name of the DOM element, which in my case is a button. Next we define the attribute, and as mentioned it has to be prefixed with “data-” (our attributes class does this automatically in the RTL, so you don’t need to prefix it when you code). And finally we define the value the style should trigger on, “this rocks”.

Right, let’s write some code:

  MyButton := TW3Button.Create(self);
  MyButton.SetBounds(100, 280, 100, 44);
  MyButton.Caption := 'Click me!';
  MyButton.OnClick := procedure (Sender: TObject)
  begin
    var Text := MyButton.Attributes.Read('funky');
    if Text <> 'this rocks' then
      MyButton.Attributes.Write('funky', 'this rocks')
    else
      MyButton.Attributes.Write('funky', '');
  end;
The code is very simple, we read the value of the attribute and then we do a toggle based on the content. So when you click the button it will just toggle the trigger value.

This is how the button looks before we click it:


And when we click the button the attribute is written to, and it’s automatically styled:


How cool is that! The things you can automate with this are almost endless. It is a huge boon for anyone writing mobile applications with Smart Mobile Studio, and it makes what would otherwise be a difficult task ridiculously easy.


ClientRect, BoundsRect and adventures in Smart Pascal layout land

November 6, 2017

HTML really is the kitchen sink of ideas. Some of them are good, others are bad – but all of them have a valid reason for being there.

When coming from Delphi or C++Builder to web development, you really feel like you have tumbled down the rabbit hole from time to time. Especially when it comes to things like margins, padding and client-rect values.

You would imagine that BoundsRect gives you the full size of a control. In fact, BoundsRect() should just be the same as putting Left, Top, Width and Height into a TRect structure, right? Same with ClientRect: it should be the same as putting 0, 0, ClientWidth, ClientHeight into a TRect structure, right?

Smart Mobile Studio uses absolute positioning, which means that you can layout controls at ordinary cartesian coordinate values. If you place a button at position 10, 10 – that means 10 pixels from the left edge and 10 pixels from the top edge. This is what we are used to from Delphi and other native languages.

But the browser has different boxing models, or box-sizing modes if you like. We are using the one best suited for per-pixel positioning, namely “border-box”. This means that the width and height values for the control include the size of the content, its padding and the size of the border. It excludes things like margin, since that is just empty air the browser adds to the final coordinates of a visual control.
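To make the arithmetic concrete, here is a small sketch; the control name and the CSS values (4px padding, 3px border, 10px margin) are made up for illustration:

  // border-box: the 200 x 100 you ask for includes padding and border
  MyControl.SetBounds(10, 10, 200, 100);

  var LPad := 4;  // from CSS: padding: 4px
  var LBrd := 3;  // from CSS: border: 3px solid

  // the usable content area inside the control is therefore:
  var ContentWidth  := 200 - 2 * (LPad + LBrd);  // = 186
  var ContentHeight := 100 - 2 * (LPad + LBrd);  // =  86

  // a 10px CSS margin is NOT part of the 200 x 100; the browser adds
  // it when placing the element, so it lands visually at 20, 20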

Doing it by the book

Since we are moving Smart out of its homebrew production style these days, there had to come a time when this was dealt with properly.

If you don’t care about making your own controls then this won’t affect you at all. You will always be able to drag & drop some controls on the form-designer, or (like most of us do) create them from code and perform layout during the Resize() method.

But.. if you want to make controls that conform to our theme engine, that actually give a damn about margins and padding, and want to give CSS the power to change existing controls the way it deserves, then you better pay attention.

Having experimented with this for a while now, here are the two cardinal rules you must follow if you want your controls to account for the margin, padding and border-sizes defined in our CSS theme files:

  1. Margins only apply when positioning child elements that have margins
  2. When doing layout of child elements, padding only applies from the parent or container of the content, not from the content itself.

Example for rule #1

Imagine you have a panel on a form. You want to populate that panel with 10 child elements and you want to do it properly, accounting for whatever padding the panel may have – and also whatever margins may exist on some of the child elements.

  var dx := W3Panel1.Border.Left.Padding;
  var dy := W3Panel1.Border.Top.Padding;
  for var x := 0 to 9 do
  begin
    var Item := FItems[x];
    var ItemRect := TRect.Create(dx, dy, dx + Item.Width, dy + Item.Height);
    ItemRect.Right  -= (Item.Border.Left.Margin + Item.Border.Right.Margin);
    ItemRect.Bottom -= (Item.Border.Top.Margin + Item.Border.Bottom.Margin);
    Item.SetBounds(ItemRect.Left, ItemRect.Top, ItemRect.Width, ItemRect.Height);
    inc(dy, ItemRect.Height);
  end;

Look at the code above. Notice that we don’t initialize dx and dy to 0 (zero). We could of course, but that would defeat the purpose of being CSS friendly.

Also notice that we don’t add the left and top margin to the final rectangle; this is because the browser automatically does this for us. Instead, we need to shrink the right and bottom edges of the rectangle by subtracting the size of the left and right / top and bottom margins.

So if you want theme-friendly layouts, you have to go the extra mile and include these things.

Note: the above was just an example; our ClientRect() function already deals with padding for us, so in real code you would set dx and dy to ClientRect.Left and ClientRect.Top.

The ClientWidth and ClientHeight methods, however, remain unaffected by padding, because there will be cases where you want full control and non-conformity.
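In other words, inside a control’s method the two give you slightly different rectangles:

  var LPadded := ClientRect;  // left/top already offset by the CSS padding
  var LRaw := TRect.Create(0, 0, ClientWidth, ClientHeight);  // raw, padding ignored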

Example of rule #2

Think of a text-editor. You want to add a bit of margin to the document and simply drag the left-margin widget to where you need it. Padding for HTML elements works pretty much the same way.

To demonstrate I will create a test container class and a test child class.

First, create a new visual application to play with. Drop a TW3Panel control on the form and size it to fill the form (with a bit of air from the edges, naturally).

Next, go into project options and check “use custom stylesheet”. That way Smart will clone whatever style you are using and create a new node in your project manager. Add the following CSS to the stylesheet:

.TTestOwner {
  margin: 20px;
  padding: 4px;
  border: 3px solid #FFFF00;
  background-color: #FF0000;
}

.TTestChild {
  margin: 10px;
  padding: 4px;
  border: 3px solid #000000;
  background-color: #FFFFFF;
}

With the CSS in place, add the following pascal classes to your mainform code, just below the “type” keyword:

TTestChild = class(TW3CustomControl)
end;

TTestOwner = class(TW3CustomControl)
end;

Now, prior to writing this article I made a couple of helper functions. If you are using Alpha 1 (which most of you are), add the following class and code to your Form1 unit:

TW3Theme = class
  class function  AdjustRectToLayoutFactors(const ThisControl: TW3MovableControl; const Rect: TRect): TRect;
  class function  GetPaddedClientRect(const ThisControl: TW3MovableControl): TRect;
end;

class function TW3Theme.GetPaddedClientRect(const ThisControl: TW3MovableControl): TRect;
begin
  if ThisControl <> nil then
  begin
    result := TRect.Create(0, 0, ThisControl.ClientWidth, ThisControl.ClientHeight);
    result.Left += ThisControl.Border.Left.Padding;
    result.Top += ThisControl.Border.Top.Padding;
    result.Right -= ThisControl.Border.Right.Padding;
    result.Bottom -= ThisControl.Border.Bottom.Padding;
  end else
  result := TRect.NullRect;
end;

class function TW3Theme.AdjustRectToLayoutFactors(const ThisControl: TW3MovableControl; const Rect: TRect): TRect;
begin
  if ThisControl <> nil then
  begin
    (*  Rule #1, "margins should only be added when dealing with child elements"
        Since we are using "border-box" as our size model, padding and border is
        already included in the clientwidth / clientheight values we get from the
        browser.

        More importantly, margin only affects the left and top edge of a rectangle
        because those are the only factors that affect MoveTo() type functionality
        in the browser itself.

        So when the browser moves an element to position 10px, 10px, it automatically
        adds the margin. If you have a margin of 10 pixels - the result will be
        (visually) that the control ends up at 20px, 20px instead. *)

    // Start with a carbon copy of the rectangle we were given
    result := Rect;

    (*  Note: The browser can only know about the left and top edge when
        placing elements. It cannot see into the future to know the exact
        height of an element, or if the content will suddenly grow. So we
        have to calculate the right and bottom based on our knowledge
        from the Rect parameter *)
    result.Right  -= (ThisControl.Border.Left.Margin + ThisControl.Border.Right.Margin);
    result.Bottom -= (ThisControl.Border.Top.Margin + ThisControl.Border.Bottom.Margin);

    (*  Rule #2: Padding should only be applied from a control's parent when
        calculating a position for that child. This is recursive, so a parent
        will apply this to its children, and each child will force its
        padding on any children it may house. *)
    if ThisControl.Parent <> nil then
    begin
      var Owner := TW3MovableControl(ThisControl.Parent);
      result.Left += Owner.Border.Left.Padding;
      result.Top += Owner.Border.Top.Padding;
      result.Right -= Owner.Border.Right.Padding;
      result.Bottom -= Owner.Border.Bottom.Padding;
    end;
  end else
  result := Rect;
end;

With both the styling and the pascal classes out of the way, let’s add some code to get the magic working.

So copy & paste this into your W3Form1.InitializeForm() procedure:

  var LRect:  TRect;
  var Box:    TTestOwner;
  var Child:  TTestChild;

  // Create parent container
  Box := TTestOwner.Create(W3Panel1);
  LRect := TRect.Create(0, 0, W3Panel1.ClientWidth, 300);
  LRect := TW3Theme.AdjustRectToLayoutFactors(Box, LRect);
  Box.SetBounds(LRect.Left, LRect.Top, LRect.Width, LRect.Height);

  // create child element for our test-owner
  Child := TTestChild.Create(Box);
  LRect := TRect.Create(0, 0, Box.ClientWidth, Box.ClientHeight);
  LRect := TW3Theme.AdjustRectToLayoutFactors(Child, LRect);
  Child.SetBounds(LRect.Left, LRect.Top, LRect.Width, LRect.Height);

Note: The TW3Theme class is a part of Alpha 2, which should hit the download section next week. But you now have everything you need to get this working, no matter what version of Smart Mobile Studio you are using.

Putting it all together

OK, let’s run our application and have a look at the results. What we should see is a panel on a form – and inside that a box that is 20 pixels from the edges (since the CSS defines 20 pixel margins). The box also has 4 pixels of padding defined, so the total offset from the edges should be 24 pixels.

The child control inside the box likewise has margin and padding. It operates with 10 pixels of margin and 4 pixels padding. It also sports a 3 pixel border. So let’s see what we have so far:


As you can see it’s not that hard to deal with; just a bit of a brain teaser. Those who write custom controls for Delphi are used to dealing with stuff like this all the time. The difference is that native frameworks are less cryptic about things, and they also make width / height return the full size of the control, regardless of what the content may be.

You might have noticed that Delphi has a fairly new “AlignWithMargins” property? Not sure when it came into the system, but somewhere around Delphi XE I believe (?). There you define the size of the margins – and Delphi does the rest. You don’t have to think about the size of the margin, and it only comes into play when the Align property is activated.

Final notes

We are doing some brainstorming on how best to deal with these things right now. Personally I think the code I have shown so far, especially the helper code, goes a long way toward making this easy to work with.

Some have voiced that ClientRect should always start at zero, but why is that? Where does it say that ClientRect should always be “0, 0, width-1, height-1”? That is not the voice of reason, that is the sound of old habits! The whole point of having a ClientRect, be it in Delphi, Lazarus, C++ or C#, is that it can change. It would be equally futile to demand that ClipRect should always be the same as ClientRect. That is to utterly miss the whole point of sequential rendering and fast graphics.

So the lesson is: if you play by the rules and never use hard-coded values, then your code won’t be affected. And if you want to adjust your code to be 100% theme compatible (and again, this is only valuable for component writers), then calling a simple function to get the rectangle adjusted for margin etc. is not exactly rocket science. It’s a one-liner.

Well, hope it helps!

Custom dialog and loading data from JSON in Smart Pascal

October 30, 2017 Leave a comment

Right now we are putting the finishing touches on our next update, which contains our new theme engine. As mentioned earlier (especially if you follow us on Facebook) the new system builds on the older – but we have separated border and background from the basic element styling.

When working with the new theme system, I needed an application that could demonstrate and show all the different border and background types and most of our visual controls – but also information about what Smart Mobile Studio is, what its features are and where you can buy it.


So it started as a personal application just to get a good overview of the CSS themes I was working on; but it has become an example in its own right.

Don’t hardcode, just don’t

If you look at the picture above, there is a MenuList with the options: “Introduction”, “Features” and “Where to buy”. When you click these I naturally want to inform the user about these things by displaying information.

I could have hardcoded the information text into the application; in many ways that would have been simpler (considering the data requirements here are practically insignificant). But all that text within the source? I hate mess like that.

Secondly, how exactly was I going to show this information? Would I use the modal framework already in place, or code something more lightweight?

As always I ended up making a new and more lightweight system. A reader style dialog appears and allows you to scroll vertically. The header contains the title of the information and a close button.


Typical “reader” style dialog with scrolling

I also used a block-box to prevent the user from reaching the UI until they click the close-button. You will notice that the form, toolbar and header in the back are darkened. This is actually a semi-transparent control that does one thing: prevent anyone from clicking or interacting with the UI while the dialog is active.

The JSON file structure

The structure I needed was very simple: our records would have a unique ID that we use to fetch and recognize content; each record would also have a Title and Text property. It really doesn’t have to be more difficult than that.

To work with the JSON I used the online JSON editor JSonEditorOnline, which is actually really good! It allows you to write your JSON and then format it so that special characters (like CR+LF) are properly encoded.
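The post doesn’t show the actual file, but based on the Parse() method further down (an “infotext” array of objects with id, title and text members), the data presumably looks something like this – the text values here are made up:

{
  "infotext": [
    {
      "id": "introduction",
      "title": "Introduction",
      "text": "Smart Mobile Studio is a compiler, IDE and runtime library..."
    },
    {
      "id": "features",
      "title": "Features",
      "text": "Compiles object pascal into compact, fast JavaScript..."
    }
  ]
}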

Putting it all together

Having coded the dialog first, I sat down and finished a sort of “Turbo Pascal” record database system for this particular file format. It’s not very big nor extremely advanced – but that’s the entire point! Throwing SQLite or MongoDB at something as simple as a few records of data – especially when the data is this simple – is just a complete waste of time and effort.

Right, let’s have a peek at the code shall we!

unit infodialog;

interface

uses
// The interface uses-clause was lost from the original post; these are
// the usual suspects - adjust the unit names to your RTL version:
  System.Types, System.JSON,
  SmartCL.System, SmartCL.Components;

type
  TInfoDialog = class(TW3Panel)
  private
    FHead:    TW3HeaderControl;
    FBox:     TW3Scrollbox;
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure Resize; override;
  public
    property  Header: TW3HeaderControl read FHead;
    property  Content: TW3Scrollbox read FBox;

    class function ShowDialog(Title, Content: string): TInfoDialog;
  end;

  TAppInfoRecord = record
    iiId:     string;
    iiTitle:  string;
    iiText:   string;
    procedure Clear;
    class function Create(const Id, Title, Text: string): TAppInfoRecord;
  end;

  TAppInfoDB = class(TObject)
  private
    FStack:     array of TStdCallback;
    FItems:     array of TAppInfoRecord;

    procedure   Parse(DBText: string);

    procedure   HandleDataLoaded(const FromUrl: string;
                const TextData: string; const Success: boolean);
  public
    property    Empty: boolean read ( (FItems.Count < 1) );
    property    Count: integer read (FItems.Count);
    property    Items[index: integer]: TAppInfoRecord
                read  (FItems[index])
                write (FItems[index] := Value);

    function    GetRecById(Id: string; var Info: TAppInfoRecord): boolean;

    procedure   LoadFrom(Url: string; const CB: TStdCallback);
    procedure   Clear;

    destructor  Destroy; override;
  end;

implementation

uses SmartCL.Application;

// TAppInfoRecord

class function TAppInfoRecord.Create(const Id, Title, Text: string): TAppInfoRecord;
begin
  result.iiId := Id.trim();
  result.iiTitle := Title.trim();
  result.iiText := Text;
end;

procedure TAppInfoRecord.Clear;
begin
  iiId := '';
  iiTitle := '';
  iiText := '';
end;

// TAppInfoDB

destructor TAppInfoDB.Destroy;
begin
  if FItems.Count > 0 then
    Clear();
  inherited;
end;

procedure TAppInfoDB.Clear;
begin
  FItems.Clear();
end;

function TAppInfoDB.GetRecById(Id: string; var Info: TAppInfoRecord): boolean;
begin
  result := false;
  if not Empty then
  begin
    Id := Id.trim().ToLower();
    if Id.length > 0 then
    for var x := 0 to Count-1 do
    begin
      result := Items[x].iiId.ToLower() = Id;
      if result then
      begin
        Info := Items[x];
        break;
      end;
    end;
  end;
end;

procedure TAppInfoDB.Parse(DBText: string);
var
  vId:    variant;
  vTitle: variant;
  vText:  variant;
begin
  DbText := DbText.trim();
  if DbText.length > 0 then
  begin
    var FDb := TJSONObject.Create;
    // NB: the call that feeds DbText into FDb was lost from the
    // original listing

    if FDb.Exists('infotext') then
    begin
      // get the infotext-> [] array of JS objects
      var Root: TJSInstanceArray := TJSInstanceArray( FDb.Values['infotext'] );

      for var x := 0 to Root.Count-1 do
      begin
        var node := TJSONObject.Create(Root[x]);
        if node <> nil then
        begin
          node.Read('id', vId)
              .Read('title', vTitle)
              .Read('text', vText);

          FItems.add( TAppInfoRecord.Create(vId, vTitle, vText) );
        end;
      end;
    end;
  end;
end;

procedure TAppInfoDB.LoadFrom(Url: string; const CB: TStdCallback);
begin
  if assigned(CB) then
    FStack.add(CB);
  TW3Storage.LoadFile(Url, @HandleDataLoaded);
end;

procedure TAppInfoDB.HandleDataLoaded(const FromUrl: string;
          const TextData: string; const Success: boolean);
begin
  // Parse if data ready
  if Success then
    Parse(TextData);

  // Perform callbacks
  while FStack.Count > 0 do
  begin
    var CB := FStack.pop();
    if assigned(CB) then
      CB(Success); // assumes TStdCallback takes a success flag
  end;
end;

// TInfoDialog

procedure TInfoDialog.InitializeObject;
begin
  inherited;
  FHead := TW3HeaderControl.Create(self);
  FHead.BackButton.Visible := false;
  FHead.NextButton.Caption := 'Close';

  // By default the header text is centered within the space allocated for it,
  // which by default is 2/4. This can look a bit off when we never show
  // the left-button. So we force text-align to the left [normal].
  FHead.Title.Handle.style['text-align'] := 'left';

  FBox := TW3Scrollbox.Create(self);
  FBox.ScrollBars := sbIndicator;
end;

procedure TInfoDialog.FinalizeObject;
begin
  FHead.free;
  FBox.free;
  inherited;
end;

procedure TInfoDialog.Resize;
begin
  inherited;
  var LBounds := ClientRect;
  var dy := LBounds.Top;

  if FHead <> nil then
  begin
    FHead.SetBounds(LBounds.Left, LBounds.Top, LBounds.Width, 32);
    inc(dy, FHead.Height + 1);
  end;

  if FBox <> nil then
    FBox.SetBounds(LBounds.Left, dy, LBounds.Width, LBounds.Height - dy);
end;

class function TInfoDialog.ShowDialog(Title, Content: string): TInfoDialog;
begin
  var Host := Application.Display;
  var Shade := TW3BlockBox.Create(Host);

  var wd := Host.Width * 90 div 100;
  var hd := Host.Height * 80 div 100;
  var dx := (Host.Width div 2) - (wd div 2);
  var dy := (Host.Height div 2) - (hd div 2);

  var Dialog := TInfoDialog.Create(Shade);
  Dialog.Header.Title.Caption := Title;
  Dialog.SetBounds(dx, dy, wd, hd);
  Dialog.fxZoomIn(0.3, procedure ()
  begin
    Dialog.Content.Content.InnerHTML := Content;
  end);

  Dialog.Header.NextButton.OnClick := procedure (Sender: TObject)
  begin
    Dialog.fxFadeOut(0.2, procedure ()
    begin
      TW3Dispatch.Execute( procedure ()
      begin
        Shade.free; // removing the block-box takes the dialog with it
      end, 100);
    end);
  end;

  result := Dialog;
end;

end.


Using the code

The first thing you want to do is to create an instance of TAppInfoDB when your application starts. Remember to add your JSON file to the project and make sure it’s formatted properly, and then use the LoadFrom() method to load in the data:

  // Create our info database and load in the
  // introduction, features etc. JSON datafile
  FInfoDb := TAppInfoDB.Create;
  FInfoDb.LoadFrom('res/JSON1', nil);

The final parameter in the LoadFrom() method is a callback. So if you want to be notified when the file has loaded, just pass an anonymous procedure there.
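Something like this, assuming TStdCallback is a simple success-flag procedure (the exact signature lives in the RTL):

  FInfoDb.LoadFrom('res/JSON1',
    procedure (Success: boolean)
    begin
      if Success then
        WriteLn('Info database ready');
    end);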

Showing a dialog with the information is then reduced to looking up the text you need by its ID, and firing up the reader dialog for it:

  W3Button1.OnClick := procedure (Sender: TObject)
  begin
    var LInfo: TAppInfoRecord;
    if FInfoDb.GetRecById('introduction', LInfo) then
      TInfoDialog.ShowDialog(LInfo.iiTitle, LInfo.iiText);
  end;

And that’s it! Simple, effective and ready to be dropped into any application. Enjoy!

Making your own DOM events in Smart Pascal

October 20, 2017 Leave a comment

Being able to listen to events is fairly standard stuff in Smart Mobile Studio and JavaScript in general. But what is not so common is to create your own event-types from scratch that fire on a target, and that users of JS can listen to and use.

The word Events in cut-out magazine letters pinned to a cork board

Now before you get confused and think this is a newbie post: I am talking about DOM (document object model) level events here; these are quite different from the event model we have in object pascal. What I’m talking about is being able to create events that external libraries can use – libraries written in plain JavaScript rather than Smart Pascal.

Interesting events

While you may think that events like that, which are akin to all the other DOM events, have little or no use – think again. First of all you can dispatch them on any element and event-emitter, so you can in fact register events on common elements like Document. You can then use custom events as a bridge between your Smart code and third-party libraries. So if you have written a kick-ass media system and want to sell it to a customer who only knows JavaScript – then native JS events can act as a bridge.

Right, let’s look at a little unit I wrote to simplify this:

unit userevents;

interface

uses
// Again, the interface uses-clause was not part of the original post;
// adjust the unit names to your RTL version:
  System.Types, System.JSON, SmartCL.System;

type
  IW3Prototype = interface
    procedure AddField(FieldName: string; const DataType: TRTLDatatype);
    function  FieldExists(FieldName: string): boolean;
    procedure SetEventName(EventName: string);
  end;

  TW3CustomEvent = class(TObject, IW3Prototype)
  private
    FName:      string;
    FData:      TJSONObject;
    FDefining:  boolean;
    procedure   SetEventName(EventName: string);
    procedure   AddField(FieldName: string; const DataType: TRTLDatatype);
    function    FieldExists(FieldName: string): boolean;
    function    GetReady: boolean;
  public
    property    Name: string read FName;
    property    Ready: boolean read GetReady;

    function    DefinePrototype(var IO: IW3Prototype): boolean;
    procedure   EndDefine(var IO: IW3Prototype);
    function    NewEventData: TJSONObject;

    procedure   Dispatch(const Handle: TControlHandle; const EventData: TJSONObject);

    constructor Create; virtual;
    destructor  Destroy; override;
  end;

implementation

// TW3CustomEvent

constructor TW3CustomEvent.Create;
begin
  inherited Create;
  FData := TJSONObject.Create;
end;

destructor TW3CustomEvent.Destroy;
begin
  FData.free;
  inherited;
end;

function TW3CustomEvent.GetReady: boolean;
begin
  result := (FDefining = false) and (FName.Length > 0);
end;

procedure TW3CustomEvent.Dispatch(const Handle: TControlHandle; const EventData: TJSONObject);
var
  LEvent: THandle;
  LParamData: variant;
begin
  if GetReady() then
  begin
    if (Handle) then
    begin
      // Check for detail-fields, get javascript object if available
      if EventData <> nil then
        if EventData.Count > 0 then
          LParamData := EventData.Instance;

      if (LParamData) then
      begin
        // Create event object with detail-data
        var LName := FName.ToLower().Trim();
        asm
          @LEvent = new CustomEvent(@LName, { detail: @LParamData });
        end;
      end else
      begin
        // Create event without detail-data
        var LName := FName.ToLower().Trim();
        asm
          @LEvent = new Event(@LName);
        end;
      end;

      // Dispatch event-object
      Handle.dispatchEvent(LEvent);
    end;
  end;
end;

procedure TW3CustomEvent.SetEventName(EventName: string);
begin
  if FDefining then
  begin
    EventName := EventName.Trim().ToLower();
    if EventName.Length > 0 then
      FName := EventName
    else
      raise EW3Exception.Create
      ('Invalid or empty event-name error');
  end else
  raise EW3Exception.Create
  ('Event-name can only be written while defining error');
end;

function TW3CustomEvent.FieldExists(FieldName: string): boolean;
begin
  if FDefining then
    result := FData.Exists(FieldName)
  else
    raise EW3Exception.Create
    ('Fields can only be accessed while defining error');
end;

procedure TW3CustomEvent.AddField(FieldName: string; const DataType: TRTLDatatype);
begin
  if FDefining then
  begin
    if not FData.Exists(FieldName) then
      FData.AddOrSet(FieldName, TDataType.NameOfType(DataType))
    else
      raise EW3Exception.CreateFmt
      ('Field [%s] already exists in prototype error', [FieldName]);
  end else
  raise EW3Exception.Create
  ('Fields can only be accessed while defining error');
end;

function TW3CustomEvent.NewEventData: TJSONObject;
const
  MAX_INT_16 = 32767;
  MAX_INT_08 = 255;
begin
  result := TJSONObject.Create;
  var LOut := result; // capture for the closure below
  FData.ForEach(
    function (Name: string; var Data: variant): TEnumState
    begin
      // clear data with datatype value to initialize.
      // NB: most of the case-labels were mangled when this listing was
      // first published; only the boolean and string branches survived
      // (the lost integer/float branches used defaults such as the
      // MAX_INT_* constants above), so everything else becomes null here.
      case TDataType.TypeByName(TVariant.AsString(Data)) of
      itBoolean:  Data := false;
      itString:   Data := '';
      else        Data := null;
      end;
      LOut.Write(Name, Data);
      result := esContinue;
    end);
end;

function TW3CustomEvent.DefinePrototype(var IO: IW3Prototype): boolean;
begin
  result := not FDefining;
  if result then
  begin
    FDefining := true;
    IO := (Self as IW3Prototype);
  end;
end;

procedure TW3CustomEvent.EndDefine(var IO: IW3Prototype);
begin
  if FDefining then
    FDefining := false;
  IO := nil;
end;

end.


Patching the RTL

Sadly there was a bug in the RTL that prevented TJSONObject.ForEach() from functioning properly. This has been fixed in the update we are preparing now, but it will still be a few days before that is released.

You can patch this manually right now with this little fix. Just go into the System.JSON.pas file and replace the TJSonObject.ForEach() method with this one:

function TJSONObject.ForEach(const Callback: TTJSONObjectEnumProc): TJSONObject;
var
  LData: variant;
begin
  result := self;
  if assigned(CallBack) then
  begin
    var NameList := Keys();
    for var xName in NameList do
    begin
      Read(xName, LData);
      if CallBack(xName, LData) = esContinue then
        Write(xName, LData)
      else
        break;
    end;
  end;
end;

Creating events

Events come in two flavours: those with data and those without. This is why we have the DefinePrototype() and EndDefine() methods – namely to define what data fields the event should take. If you don’t populate the prototype then the class will create an event without detail data.

Secondly, events don’t need to be registered anywhere. You create one, dispatch it to a handle (or element), and if there is an event-listener attached there looking for that name – it will fire.

Ok let’s have a peek:

  // Create a custom, new, system-wide event
  var LEvent := TW3CustomEvent.Create;
  var IO: IW3Prototype = nil;
  if LEvent.DefinePrototype(IO) then
  begin
    IO.SetEventName('userdata'); // the event-name was lost from the original listing
    IO.AddField('name', TRTLDatatype.itString);
    IO.AddField('id', TRTLDatatype.itInt32);
    LEvent.EndDefine(IO);
  end;

  // Setup a normal event-listener on the same target we dispatch to below
  Display.Handle.addEventListener('userdata', procedure (ev: variant)
  begin
    var data := ev.detail;
    if (data) then
      WriteLn(data.name);
  end);

  // Populate some event-data
  var MyData := LEvent.NewEventData();
  MyData.Write('name', 'John Doe');
  MyData.Write('id', '{F6EB5680-5DC1-422E-8F72-5C60EAC0B46F}');

  // Now send the event to whomever is listening
  LEvent.Dispatch(Display.Handle, MyData);

In the above example I use the Application.Display control as the event-target. There is no special reason for this except that it’s always available. You would naturally create events like this inside your own TW3CustomControl (or perhaps on the Document element, under a namespace).

You will also notice that any data sent ends up in the “detail” field of the event object. We use a variant datatype since that maps directly to any JS object and also lets us access any property (and create properties on that mapper); that’s why the “ev” parameter in addEventListener() is a variant, not a fixed class.

Well, hope you enjoy the show and happy coding!

PS: Smart now uses an event-manager to deal with input events (mouse, touch), but the other events work like before. Have a look at SmartCL.Events.pas to see some time-saving event classes. Instead of having to use ASM sections and variants, you can use object pascal classes to map any event.

Smart Mobile Studio and CSS: part 4

October 18, 2017 Leave a comment

If you missed the previous articles, I urge you to take the time to read through them. While not explicit to the content of this article, they will give you a better context for the subject of CSS and how Smart Mobile Studio deals with things.

Scriptable CSS

If you are into web technology you probably know that the latest fad is so-called CSS compilers [sigh]. One of the more popular is called Less, which you can read up on over at lesscss.org. And then you have SASS, which seems to have better support in the community. I honestly could not care less (pun intended).

So what exactly is a CSS compiler and why should it matter to you as a Smart Pascal developer? That is a good question! First of all, it doesn’t matter to you at all. Not one iota. Why? Because Smart Mobile Studio has supported scriptable CSS for years. So while the JS punters think they have invented gunpowder, they keep on re-inventing the exact same stuff native languages and their programmers have used for ages. They just bling it up with cool names to make it seem all new and dandy (said the grumpy 44 year old man child).

In short a CSS compiler allows you to:

  • Define variables and constant values you can use throughout your style-sheet
  • Define repeating sections of CSS, a poor man’s “for-next block” if you like
  • Merge styles together, which is handy at times

Smart Mobile Studio took it all one step further, because we have a lot more technology on our hands than just vanilla JavaScript. So what we did was to dump the whole onslaught of power from Delphi Web Script – and we bolted that into our CSS linker process. So while the JS guys have a parser system with a ton of cryptic identifiers – we added something akin to ASP to our CSS module. It’s complete overkill but it just makes me giggle like a little girl whenever I use it.


The new themes being created now all tap into scripting to automate things

But how does it work, you say? Does it execute with the program? Nope. It’s purely a part of the linking process, so it executes when you compile your program. Whatever you emit (using the Print() method) or assign via the tags ends up at that location in the output. Think PHP or ASP but for CSS instead:

  1. Smart takes your CSS file (with code) and feeds it to DWScript
  2. DWScript runs it, and spits out the result to a buffer
  3. The buffer is sent to the linker
  4. The linker saves the data either as a separate CSS file, or statically links it into your HTML file.

Pretty cool or what!
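As a tiny illustration of that pipeline, here is the Print() mechanism mentioned above. The rule and constant are made up; the point is that whatever the script emits is spliced into the stylesheet exactly where the tag sits:

<?pas
  // executed by DWScript at link-time, not in the browser;
  // the Print() output replaces this tag in the final CSS
  const clText = "#303030";
  Print('.TMyLabel { color: ' + clText + '; font-weight: bold; }');
?>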

So what good can that do?

It can do a world of good. For instance, when you create a theme it’s important to use the same values to ensure that things have the same general layout, colors and styles. Since you can now use constants, variables, for/next loops, classes, records and pretty much everything DWScript has to offer – you have a huge advantage over these traditional JS based compilers.

  • Gradients are generated via a pascal function
  • Font names are managed via constants
  • Font sizes can be made uniform throughout the theme
  • Standard colors that you can also define in your Smart code, thus having a unified color system, can be easily shared between the css-pascal and the smart-pascal codebases.
  • Instead of defining the same color over and over again, perhaps in hundreds of places, use a constant. When you need to adjust something you change one value instead of 200 values!

It’s no secret that browser standards are hard to deal with. For instance, did you know that there are 3 different webkit formats for defining a top-down gradient? Then you have the firefox version, the microsoft (Edge) version, the microsoft IE version, the opera version and heaven forbid: the W3C “standard” that nobody seems interested in supporting. Having to hand-carve the same gradients over and over for the different backgrounds (of a theme) that might use them can be both time-consuming and infuriating.

Let’s look at some code that you can use in your stylesheets straight away. It’s almost like a mini-unit that perhaps should be made external later. But for now, have a peek:

<?pas
  const EdgeRounding          = "4px";
  const clDlgBtnFace          = "#ededed";

  //#############################################
  // Fonts
  //#############################################
  const fntDefaultName = '"Ubuntu"';
  const fntSmallSize   = "12px";
  const fntNormalSize  = "14px";
  const fntMediumSize  = "18px";
  const fntLargeSize   = "24px";
  const fntDefaultSize = fntNormalSize;

  type
  TRGBAText = record
    rs: string;
    gs: string;
    bs: string;
    ac: string;
  end;

  type
  TBrowserFormat = (
    gtWebkit1,
    gtWebkit2,
    gtMoz,
    gtMs,
    gtIE,
    gtAny
  );

  function GetR(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 1, 2);
      result := HexToInt(temp).ToString();
    end else
    result := '00';
  end;

  function GetG(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 3, 2);
      result := HexToInt(temp).ToString();
    end else
    result := '00';
  end;

  function GetB(ColorDef: string): string;
  begin
    if ColorDef.StartsWith('#') then
    begin
      delete(ColorDef, 1, 1);
      var temp := Copy(ColorDef, 5, 2);
      result := HexToInt(temp).ToString();
    end else
    result := '00';
  end;

  function OpacityToStr(const Opacity: float): string;
  begin
    result := FloatToStr(Opacity);
    if result.IndexOf(',') > 0 then
      result := StrReplace(result, ',', '.');
  end;

  function ColorDefToRGB(const ColorDef: string): TRGBAText;
  begin
    result.rs := GetR(ColorDef);
    result.gs := GetG(ColorDef);
    result.bs := GetB(ColorDef);
    result.ac := '255';
  end;

  function ColorDefToRGBA(const ColorDef: string; Opacity: float): TRGBAText;
  begin
    result.rs := GetR(ColorDef);
    result.gs := GetG(ColorDef);
    result.bs := GetB(ColorDef);
    result.ac := OpacityToStr(Opacity);
  end;

  function GetRGB(ColorDef: string): string;
  begin
    result += 'rgb(';
    result += GetR(ColorDef) + ', ';
    result += GetG(ColorDef) + ', ';
    result += GetB(ColorDef);
    result += ')';
  end;

  function GetRGBA(ColorDef: string; Opacity: float): string;
  begin
    result += 'rgba(';
    result += GetR(ColorDef) + ', ';
    result += GetG(ColorDef) + ', ';
    result += GetB(ColorDef) + ', ';
    result += OpacityToStr(Opacity);
    result += ')';
  end;

  function SetGradientRGBSInMask(const Mask: string; First, Second: TRGBAText): string;
  begin
    result := StrReplace(Mask,   '$r1', First.rs);
    result := StrReplace(result, '$g1', First.gs);
    result := StrReplace(result, '$b1', First.bs);

    if result.contains('$a1') then
      result := StrReplace(result, '$a1', First.ac);

    result := StrReplace(result, '$r2', Second.rs);
    result := StrReplace(result, '$g2', Second.gs);
    result := StrReplace(result, '$b2', Second.bs);

    if result.contains('$a2') then
      result := StrReplace(result, '$a2', Second.ac);
  end;

  function GradientTopBottomA(FromColorDef, ToColorDef: TRGBAText;
           BrowserFormat: TBrowserFormat): string;
  begin
    var xFirst := FromColorDef;
    var xSecond := ToColorDef;

    case BrowserFormat of
    gtWebkit1:
      begin
        var mask := "-webkit-gradient(linear, left top, left bottom, color-stop(0, rgba($r1,$g1,$b1,$a1)), color-stop(100, rgba($r2,$g2,$b2,$a2)))";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtWebkit2:
      begin
        var mask := "-webkit-linear-gradient(top, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtMoz:
      begin
        var mask := "-moz-linear-gradient(top, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtMs:
      begin
        var mask := "-ms-linear-gradient(top, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtIE:
      begin
        var mask := "filter: progid:DXImageTransform.Microsoft.gradient(startColorstr=rgba($r1,$g1,$b1,$a1), endColorstr=rgba($r2,$g2,$b2,$a2),GradientType=0)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    gtAny:
      begin
        var mask := "linear-gradient(to bottom, rgba($r1,$g1,$b1,$a1) 0%, rgba($r2,$g2,$b2,$a2) 100%)";
        result := SetGradientRGBSInMask(mask, xFirst, xSecond);
      end;
    end;
  end;

  function GradientTopBottom(FromColorDef, ToColorDef: string;
           BrowserFormat: TBrowserFormat): string;
  begin
    (* var xfirst  := ColorDefToRGB(FromColorDef);
    var xSecond := ColorDefToRGB(ToColorDef);
    var mask := ''; *)

    case BrowserFormat of
    gtWebkit1:
      begin
        var mask := "-webkit-gradient(linear, left top, left bottom, color-stop(0, $a), color-stop(100, $b))";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtWebkit2:
      begin
        var mask := "-webkit-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtMoz:
      begin
        var mask := "-moz-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtMs:
      begin
        var mask := "-ms-linear-gradient(top, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtIE:
      begin
        var mask := "filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='$a', endColorstr='$b',GradientType=0)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    gtAny:
      begin
        var mask := "linear-gradient(to bottom, $a 0%, $b 100%)";
        result := StrReplace(mask, '$a', FromColorDef);
        result := StrReplace(result, '$b', ToColorDef);
      end;
    end;
  end;
?>
This code has to be placed at the top of your CSS. It should be the very first thing in the CSS file. Now let’s make some gradients!

.TW3ButtonBackground {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtWebkit1)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtWebkit2)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtMoz)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtMs)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtIE)?>;
  background-image: <?pas=GradientTopBottom('#FFFFFF','#F0F0F0', gtAny)?>;
}

.TW3ButtonBackground:active {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtWebkit1)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtWebkit2)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtMoz)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtMs)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtIE)?>;
  background-image: <?pas=GradientTopBottom('#E7E7E7','#FFFFFF', gtAny)?>;
}

Surely you agree that the above makes gradients a lot easier to work with? (And we can simplify it even more later.) You can also abstract it further right now by putting the start and stop colors into constants – making it super easy to maintain and change whatever styles use those constant colors.
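For instance (the constant names are just illustration):

<?pas
  const clBtnTop    = "#FFFFFF";
  const clBtnBottom = "#F0F0F0";
?>

.TW3ButtonBackground {
  background-image: <?pas=GradientTopBottom(clBtnTop, clBtnBottom, gtAny)?>;
}

Change the two constants and every style built on them follows suit.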

Now let’s use our styles for something. Start a new Smart Mobile Studio Visual Project. Do as mentioned in the previous articles and make the stylesheet visible (project options, use custom css).

Now paste the helper code at the top of your CSS file, then paste the style-code above at the end of it.

In the Smart IDE, drop a button on the form, then go into the code-editor and locate InitializeForm. Add the following to the procedure:

w3button1.StyleClass := '';

Compile and run the program, and voila: you will now have a button with a nice gradient background. A gradient that will work in all modern browsers, and that will be easy to maintain and change later should you want to.

Start today

Smart has had support for scriptable CSS files for quite some time. If you go into the Themes folder of your Smart Mobile Studio installation, you will find plenty of CSS files. Many of these use scripting as a part of their makeup. But it’s only recently that we have started to actively use it as it was meant to be used.

But indeed, spend a little time looking at the code in the existing stylesheets, and feel free to play around with the code I have posted here. The sky is the limit when it comes to creative and elegant solutions – so I’m sure you guys will do miracles with it.

Smart Mobile Studio and CSS: part 3

October 12, 2017 Leave a comment

In the first article we looked at some ground rules for how Smart deals with CSS. The most important part is how Smart Mobile Studio maps pascal class names to CSS style names. Simple, but extremely effective.

In the second article we looked at how you should write classes to make styling easy. We also talked about code discipline and that you should never use TW3CustomControl directly, because it makes styling time-consuming and cumbersome.

In this article we are going to cover two things: first we are going to look at probably the most powerful feature CSS has to offer, namely cascades. And then we are going to talk a bit about the new theme system we are working on. Please note that the new theme system is not yet available in the alpha releases. Like all things it has to go through the testing stage. All our visual controls need a little adjustment to support the new themes as well, which doesn’t affect you – but it is time-consuming work.


Writing a CSS style for your control should be pretty easy if you have read our previous two articles. But do you really want a large, complex and monolithic style? If you have a look at any stylesheet that ships with Smart Mobile Studio (there are several), you probably agree that it’s not easy to understand at times. Each control has its style definition, that part is clear, but every style includes font, backgrounds, text colors, text shadowing, margins, borders, border shadows, gradients (ad nauseam). Long story short: stylesheets like this are hell to maintain and extremely time-consuming to make.

CSS has this cool feature where you can take any number of styles and apply them to the same element. This might sound nutty at first, but think it through, because it is going to make your life a lot easier:

  • We can isolate the border style separately
  • We can have multiple border styles and pick the ones we want, rather than a single, hardcoded and fixed version
  • We can define the backgrounds, any number of them, as separate styles

That doesn’t sound too bad, does it? But wait, there is more!

Remember how I told you that animations are also defined in CSS? Since CSS allows you to add multiple styles to a control, this also means you can define a style with an animation – and then just add it when you want something to happen, and then remove the style when you don’t need it any more.

You have probably seen the spinners that websites use, right? While the website is loading something, a circle or dot keeps rotating to signal that work is being performed in the background. Well, that’s pretty easy to achieve when you understand how cascades work. You just define the animation, use it in a style – and then add that style to your control. When you want to stop the behavior you just remove the style. That’s it!
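In code the pattern is just two calls. This is a hedged sketch: the “SpinnerAnim” style is assumed to exist in your stylesheet with a CSS animation attached, and TagStyle is the RTL’s class-list helper (also used in the button example further down):

  // start the spinner: attach the style carrying the CSS animation
  W3Panel1.TagStyle.Add('SpinnerAnim');

  // ... async work happens here ...

  // done: detach the style again and the animation stops
  W3Panel1.TagStyle.Remove('SpinnerAnim');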

But let’s start with something simple. Let’s define a border and a background and apply them to a control using code. And remember: styles you add do not replace the initial style. Like we talked about earlier – Smart will take the pascal classname and use a CSS style with that name. So whatever you add to the control comes on top. Which is really powerful!

You probably want to start a new visual project for this one. Remember to pick a theme in the project options (in my case I picked the “Android-HoloLight.css” theme, just so you know), save the project, then go back into the options and check the “Use custom theme” checkbox. Again, exit the dialog and click save – the IDE will now create a copy of whatever theme you picked and give you direct access to it from the IDE.

Remember to check the custom theme in project options

With that out of the way you should now have a fresh, blank visual application with an item called “Custom CSS” in the project manager list. Now double-click on that like before so we can get cracking, and go down to the end of the file. Add the following:

<?pas
  const EdgeRounding = "4px";
  const clDlgBtnFace = "#ededed";
?>

.TMyButtonBorder {
  border-radius:  <?pas=EdgeRounding?>;
  border-top:     1px solid rgba(250, 250, 250, 0.7);
  border-left:    1px solid rgba(250, 250, 250, 0.7);
  border-right:   1px solid rgba(240, 240, 240, 0.5);
  border-bottom:  1px solid rgba(240, 240, 240, 0.5);

  -webkit-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
     -moz-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
          box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
}

.TMyButtonBorder:active {
  border-radius:  <?pas=EdgeRounding?>;
  border-top:     1px solid rgba(240, 240, 240, 0.5);
  border-left:    1px solid rgba(240, 240, 240, 0.5);
  border-right:   1px solid rgba(250, 250, 250, 0.7);
  border-bottom:  1px solid rgba(250, 250, 250, 0.7);

  -webkit-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
     -moz-box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
          box-shadow: 0px 0px 1px 1px rgba(81, 81, 81, 0.8);
}

.TMyButtonBackground {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: -webkit-gradient(linear, 0% 0%, 0% 100%,color-stop(0, rgb(255, 255, 255)),color-stop(1, rgb(240, 240, 240)));
  background-image: -webkit-repeating-linear-gradient(top,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
  background-image: repeating-linear-gradient(to bottom,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
  background-image: -ms-repeating-linear-gradient(top,rgb(255, 255, 255) 0%,rgb(240, 240, 240) 100%);
}

.TMyButtonBackground:active {
  background-color: <?pas=clDlgBtnFace?>;
  background-image: -webkit-gradient(linear, 0% 0%, 0% 100%,color-stop(0, rgb(231, 231, 231)),color-stop(0.496, rgb(231, 231, 231)),color-stop(0.5, rgb(231, 231, 231)),color-stop(1, rgb(255, 255, 255)));
  background-image: -webkit-repeating-linear-gradient(top,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
  background-image: repeating-linear-gradient(to bottom,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
  background-image: -ms-repeating-linear-gradient(top,rgb(231, 231, 231) 0%,rgb(231, 231, 231) 49.6%,rgb(231, 231, 231) 50%,rgb(255, 255, 255) 100%);
}

Now this might look like a huge mess, but most of it is gradient coloring. If you look closer you will notice that it’s the exact same gradients but with different browser prefixing. This is to ensure that things look exactly the same no matter what browser people use. Making gradients like this is easy; there are a ton of websites that deal with this. One of my favorites is ColorZilla, which will make all this code for you.

If you don’t know your CSS you might be wondering – what is that :active postfix? You have two declarations with the same name – but one of them has :active appended to it. The active selector (which is the fancy name) simply tells the browser that whenever someone interacts with an element in this state – it should switch and display the :active one instead. Typically a button will look 3d when it’s not pressed, and sunken when you press it. This is automated; you just define how an element should look when it’s pressed via the :active postfix. (Note: since different controls do different things, “active” can hold different meanings. But for most controls it means when you click it, touch it or otherwise interact with it.)

And now for the big question: what on earth is that first segment that looks like pascal code? Well, that is pascal code! All your CSS stylesheets are processed by a scripting engine and only the result is actually given to the linker. So yes indeed, you can write both functions and procedures and use them to make your CSS life easier (take that, Adobe!).

What we have done in the pascal snippet is to define a standard rounding value. That way we don’t have to update 300 places where border-radius is set (and you can blank it out if you don’t want round edges). We change the constant and it spreads to every style that uses it. Clever, huh?

OK, let’s use our styles for something fun! What we have here is a nice border definition, both active and non-active, and also a nice background. Let’s use cascades to change how a button looks!

What is a button anyways

If you switch to Form1 in your application and place a TW3Button on the form, we can start to work with it. The first thing you need to do is to clear out the style-class so that Smart doesn’t apply the default styling. That way it’s easier to see what happens. Here is how it looks when I just compile and run:


Now go into the procedure TForm1.InitializeForm() in the unit Form1. And write the following code:

procedure TForm1.InitializeForm;
begin
  inherited;

  // Remove the default styling
  w3button1.StyleClass := '';

  // Add our border (TagStyle manages the element's CSS class-list)
  w3button1.TagStyle.Add('TMyButtonBorder');

  // And add our background
  w3button1.TagStyle.Add('TMyButtonBackground');

  // Make the font autosize to the container
  w3button1.font.AutoSize := true;
end;

Now save, compile and run and we get the following result:


Suddenly our vanilla Android button has all the majesty of Ubuntu Linux! And all we did was define a couple of styles and then manually add them. We could of course have stuffed all of this into a single, monolithic style – no shame in that, but I’m sure you agree that by separating border from background, and background from content – we have a lot of power on our hands!

As an experiment: remove the line that clears the StyleClass string and see what happens. When you click the button, the browser actually blends the two backgrounds together! Had we used RGBA values in our background gradients – the browser would have blended the standard theme button with our added styles. It’s pretty frickin’ awesome if you ask me.

Here is a more extensive example of our upcoming Ubuntu Linux theme. This is not yet ready for alpha, but it represents the first theme system where all our controls make use of multiple styles. It looks and behaves beautifully.


From the labs: A Ubuntu Linux inspired theme that is done using cascading exclusively

Brave new themes

So far I have written exclusively about things you can do right now. But we are working every single day on Smart Mobile Studio, and right now my primary task is to finish a working prototype of our theme engine. As you can see from the picture above we still have a few controls that need to be adjusted. In the previous article I mentioned the importance of respecting borders, padding and margins from the stylesheet; well, let’s just say that I have learnt that the hard way.

Most of our controls were written with no consideration for these things; we use an absolute boxing model after all, so we don’t have to. But not having to do something and taking the time to do it anyway is often the difference between quality and fluff. And this time we are doing things right every step of the way.

Much like the effect system (SmartCL.Effects.pas) the theming system makes use of partial classes. This means that it simply doesn’t exist until you include the unit SmartCL.Theme in your project.

With the theme unit included (actually it’s included by the RTL so it’s there no matter what, but it won’t be visible unless you include it in your unit scope), TW3CustomControl suddenly gains a couple of properties and methods:

  • ThemeBorder property
  • ThemeBackground property
  • ThemeReset() method

When you create custom controls you can (if you need to) define a style for that control, but this time you don’t need to define borders or backgrounds. A style is now reduced to padding, margins, some font stuff and perhaps shading if you need that. Then simply assign a ThemeBorder and ThemeBackground in the StyleTagObject() method of your control – and your control will look and feel at home with everything else using that theme.

Let’s look at the standard borders first:

  • btNone
  • btFlatBorder
  • btControlBorder
  • btContainerBorder
  • btButtonBorder
  • btDialogButtonBorder
  • btDecorativeBorder
  • btEditBorder
  • btListBorder
  • btToolContainerBorder
  • btToolButtonBorder
  • btToolControlBorder
  • btToolControlFlatBorder

And then we have pre-defined backgrounds matching these:

  • bsNone
  • bsDisplay
  • bsControl
  • bsContainer
  • bsList
  • bsListItem
  • bsListItemSelected
  • bsEdit
  • bsButton
  • bsDialogButton
  • bsDecorative
  • bsDecorativeInvert
  • bsDecorativeDark
  • bsToolContainer
  • bsToolButton
  • bsToolControl

And as mentioned, you can assign these to any control you like.
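To give a feel for it, here is a hedged sketch of a theme-aware custom control. The ThemeBorder/ThemeBackground assignments and the StyleTagObject() override follow the description above, but since the theme engine isn’t in the alphas yet, treat the details as provisional:

type
  TMyToolButton = class(TW3CustomControl)
  protected
    procedure StyleTagObject; override;
  end;

procedure TMyToolButton.StyleTagObject;
begin
  inherited;
  // pick pre-defined theme parts instead of hand-writing CSS;
  // whatever theme is active supplies the actual look
  ThemeBorder := btToolButtonBorder;
  ThemeBackground := bsToolButton;
end;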

Same defines, many themes

The cool thing about the new system is that it’s not limited to one theme. We start with one of course, but ultimately all our themes will follow the new styling scheme. The goal is to use pure constants, much like what Delphi did with colors (clBtnFace and so on), so that we only need to change the coloring constants – and the changes will spread to the whole theme.

You as a Smart Mobile Studio developer don’t need to care about the details. As long as you stick to the standard types listed above, your custom controls will always look good and match whatever theme is being used.


Still a few controls to style, but I’m sure you agree that it’s starting to look nice

Well that has been a rather long introduction to Smart and CSS. I hope you have enjoyed reading it. I will keep you all posted on the progress we make, which is moving ahead very fast these days!

Personally I can’t wait until Smart Mobile Studio 3.0 is ready, and I hope people value the effort we have put into this. And we are just getting started!

Smart Mobile Studio and CSS: part 2

October 11, 2017 Leave a comment

In my previous article we had a quick look at some fundamental concepts regarding CSS. These concepts are not unique to Smart Mobile Studio; they are simply the way CSS works in general. The exception is the way Smart maps your pascal class-name to a CSS style of the same name.

To sum up what we covered last time:

  • Smart maps a control’s class-name to a CSS style with the same name. So if your control is called TMyControl, it expects to find a CSS style cleverly named “.TMyControl”. This works very well and is easy to apply.
  • CSS can affect elements recursively, so you can write CSS that changes the appearance and behavior of child controls. This technique is typically used if you inject HTML directly via the InnerHTML property
  • CSS is cascading, meaning that you can add multiple styles to the same control. The browser will merge them into a final, computed style. The rule of thumb is to avoid styles that affect the same properties
  • CSS can define more than colors; things like animations, gradients, animated gradients and whatnot can all be defined in CSS
  • Smart Mobile Studio ships with units for creating, applying and working with CSS from your pascal code. It also ships with effect classes that can trigger defined CSS animations.
  • Smart Mobile Studio has a special effect unit (SmartCL.Effects) that when added to the uses list, adds quite a few effect procedures to TW3MovableControl. These effect methods are prefixed with Fx (ex: fxMoveTo, fxFadeOut, fxScaleTo).

Best practices

When you write your own controls, don’t cheat. I have seen a lot of code where people create instances of TW3CustomControl and then jump through hoops trying to make that look good. TW3CustomControl is a base-class; it’s designed to be inherited from – not used “as is”. I can understand the confusion to some extent. Since TW3CustomControl manages a DIV by default, people with some HTML background probably think creating one of these is the same as making a DIV. But by doing so they essentially short-circuit the whole theme-system since (as underlined above) every pascal class uses a style with the same name. And TW3CustomControl is just a transparent block of nothing.

No matter how small a thing you are creating, always inherit out your own classes and give them distinct names. This is extremely important with regards to styling, but also as a discipline for writing readable, maintainable code. Using TW3CustomControl all over the place will make the code a mess to maintain – let alone share with others, who won’t have a clue what you are doing.
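A minimal illustration of the point (TMyCard is a made-up name):

type
  // Do: inherit, even when the class adds nothing yet. The element now
  // carries the CSS class "TMyCard", which your stylesheet can target.
  TMyCard = class(TW3CustomControl)
  end;

procedure TForm1.InitializeForm;
begin
  inherited;
  // Don't: a raw TW3CustomControl styles as ".TW3CustomControl",
  // a transparent block the theme cannot meaningfully distinguish.
  // var LBad := TW3CustomControl.Create(self);

  // Do: the named subclass picks up ".TMyCard" automatically.
  var LCard := TMyCard.Create(self);
  LCard.SetBounds(10, 10, 200, 120);
end;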

A practical example

To show how easy it is to style things once you have code that uses distinct class names and a clear-cut structure, let’s take the time to write a little list-box. Nothing fancy, just a control that can take X number of child rows, style them, and display the items vertically like a list. Let’s begin with the class code:


type
  // Define an exception especially for our control
  EMyControl = class(EW3Exception);

  // Define a baseclass, that way we can grow in the future
  TMyChild = class(TW3CustomControl)
  end;

  // Define a class type, good when working with lists or
  // collections of elements that share ancestors
  TMyChildClass = class of TMyChild;

  // Define a clear child class, that way we can apply
  // styling without problems
  TMyChildRed = class(TMyChild)
  end;

  // Create a custom version with sensitive properties only
  // available to ancestors. Here we place these in the protected
  // section (items and count)
  TCustomMyControl = class(TW3CustomControl)
  protected
    property Items[const index: integer]: TMyChild
            read ( TMyChild(GetChildObject(Index)) ); default;
    property Count: integer read ( GetChildCount );

    procedure Resize; override;
  public
    function Add(const NewItem: TMyChild): TMyChild; overload;
    function Add(const &Type: TMyChildClass): TMyChild; overload;
  end;

  // The actual control we use, this is the one we write
  // CSS code for and that we create and use in our applications.
  // This step is optional of course, but it has its perks
  TMyControl = class(TCustomMyControl)
  public
    property Items;
    property Count;
  end;


procedure TCustomMyControl.Resize;
var
  LCount: integer;
  bl, bt, br, bb: integer;
  wd, dy: integer;
  LItem: TMyChild;
begin
  inherited;

  // Avoid doing work if there is nothing there
  LCount := GetChildCount();
  if LCount > 0 then
  begin
    // Get the values of the borders/padding etc from CSS.
    // We need to respect these when working in the client-rect
    bl := Border.Left.Width + Border.Left.Padding + Border.Left.Margin;
    bt := Border.Top.Width + Border.Top.Padding + Border.Top.Margin;
    br := Border.Right.Width + Border.Right.Padding + Border.Right.Margin;
    bb := Border.Bottom.Width + Border.Bottom.Padding + Border.Bottom.Margin;

    // This is the maximum width an element can have without
    // bleeding over whatever styling is present
    wd := ClientWidth - (bl + br);

    // Start at the top
    dy := bt;

    // Now layout each element vertically
    for var x := 0 to LCount-1 do
    begin
      LItem := Items[x];
      LItem.SetBounds(bl, dy, wd, LItem.Height);
      inc(dy, LItem.Height);
    end;
  end;
end;

function TCustomMyControl.Add(const &Type: TMyChildClass): TMyChild;
begin
  if &Type <> nil then
  begin
    // Start update
    BeginUpdate();

    // Create our control & return it
    result := &Type.Create(self);

    // Define that a resize must be issued
    // (assumption: AddToComponentState is the RTL's deferred-resize flag)
    AddToComponentState([csSized]);

    // End update. If update was not called elsewhere
    // the resize will happen now. If not, it will happen
    // when the last EndUpdate() is called (clever stuff!)
    EndUpdate();
  end else
  raise EMyControl.Create('Failed to add item, classtype was nil error');
end;

function TCustomMyControl.Add(const NewItem: TMyChild): TMyChild;
begin
  result := NewItem;
  if NewItem <> nil then
  begin
    // Are we the current parent?
    if not Handle.Contains(NewItem.Handle) then
    begin
      // Remove from other parent
      // (assumption: RemoveFrom() detaches the element from its old parent)
      NewItem.RemoveFrom();

      // Start update
      BeginUpdate();

      // Add child to ourselves
      // (assumption: RegisterChild is TW3TagContainer's attach method)
      RegisterChild(NewItem);

      // Define that a resize must be issued
      AddToComponentState([csSized]);

      // End update. If update was not called elsewhere
      // the resize will happen now. If not, it will happen
      // when the last EndUpdate() is called (clever stuff!)
      EndUpdate();
    end;
  end else
  raise EMyControl.Create('Failed to add item, instance was nil error');
end;

If you are wondering about the strange property getters, where we don't call a function but instead have some code inside the parentheses – that is another perk of Smart Pascal. The GetChildObject() method is part of TW3TagContainer, which TW3CustomControl ultimately inherits from, so we simply typecast and call that. This is perfectly legal in Smart as long as it's a simple function call or expression with a matching type.

And now let's look at the CSS for our new control and its red child:

.TMyChildRed {
  padding: 2px;
  background-color: #FF0000;
  font-family: "Ubuntu", "Helvetica Neue", Helvetica, Verdana;
  color: #FFFFFF;
  border-bottom: 1px solid #AA0000;
}

.TMyControl {
  padding: 4px;
  background-color: #FFFFFF;
  border: 1px solid #000000;
  border-radius: 4px;
  margin: 1px;
}

We need to populate the list before we can see anything of course, so if we add the following code to InitializeForm() things will start to happen:

  // Lets create our control. We use an inline variable
  // here since this is just an example and I won't be
  // accessing it later. Otherwise you want to define it
  // as a form field in the form-class
  var LTemp := TMyControl.Create(self);
  LTemp.SetBounds(100, 100, 300, 245);

  // We call BeginUpdate here to prevent the
  // control calling Resize() for every element.
  // It will only resize when the last EndUpdate (below)
  // is called. Also see how we use this inside the
  // procedures that need to force a change
  LTemp.BeginUpdate();

  for var x := 1 to 10 do
  begin
    // Create a new "red" child
    var NewItem := LTemp.Add(TMyChildRed);

    // Fill the content with something
    NewItem.InnerHTML := 'Item number ' + x.ToString();
  end;

  LTemp.EndUpdate();

The end result might not look fancy, but it demonstrates some very basic concepts that are fundamental to working with Smart Mobile Studio: namely how to define CSS that maps to your classes, and how to use BeginUpdate() and EndUpdate() to prevent a ton of calls to Resize() when adding multiple items.


It won't win any prizes for looks, but it demonstrates some very important principles when writing controls


Being able to style and lay out child elements in your own controls is cool, but applications can quickly become dull and static without visual feedback. This is why I wrote the effect unit: to make it so easy to use GPU-powered effects in your applications that anyone can make stuff move around.

So let's make a little change to our mini-list control. When a user presses one of the items, we want the item to scale up while the mouse is pressed, and then gracefully shrink back to normal size when the mouse is released. We could make it spin around for that matter, but let's start with something a bit more down to earth.

This is where defining our own classes comes into play. We are going to add some code to our root child class, TMyChild, because this behavior should be universal. For the sake of simplicity I'm just going to use the control's own events for this purpose. So let's expand our ancestor class to the following:

  TMyChild = class(TW3CustomControl)
  private
    FDown: boolean;
    procedure HandleMouseDown(Sender: TObject; Button: TMouseButton;
                        Shift: TShiftState; X, Y: integer);
    procedure HandleMouseUp(Sender: TObject; Button: TMouseButton;
                        Shift: TShiftState; X, Y: integer);
  protected
    procedure InitializeObject; override;
  end;

The implementation needs to keep track of when a scale is in progress, otherwise we could scale the element out of sync with the UI. Again, this is just an example; there are many ways to keep track of things, but let's keep it simple:

procedure TMyChild.InitializeObject;
begin
  inherited;
  self.OnMouseDown := HandleMouseDown;
  self.OnMouseUp := HandleMouseUp;
end;

procedure TMyChild.HandleMouseDown(Sender: TObject; Button: TMouseButton;
                    Shift: TShiftState; X, Y: integer);
begin
  if Button = TMouseButton.mbLeft then
  begin
    if not FDown then
    begin
      FDown := true;
      fxScaleUp(1.0, 1.5, 0.3);
    end;
  end;
end;

procedure TMyChild.HandleMouseUp(Sender: TObject; Button: TMouseButton;
                    Shift: TShiftState; X, Y: integer);
begin
  if Button = TMouseButton.mbLeft then
  begin
    if FDown then
    begin
      fxScaleDown(1.5, 1.0, 0.3, procedure ()
      begin
        FDown := false;
      end);
    end;
  end;
end;

The result? Well, when we press one of the items in our list, that item grows to 1.5× its original size (the parameter names for the effects are easy to understand). So we scale from 1.0 (normal size) to 1.5, and we tell the system to execute this transition in 0.3 seconds.

All the effect methods have an optional callback (an anonymous procedure) that fires when the effect is finished. As you can see in the HandleMouseUp() method, we use this to reset the FDown field, allowing the effect to be executed again on the next click.


Smooth scaling via hardware

Next time

Hopefully the past two articles have been interesting. In our next article we will look at some of the stuff we are building in our labs. That means talking about styling and how we are working to improve it (read: not yet available but in the process).

In the meantime, have a peek at what you can do with proper use of CSS effects


You can do some amazing things with effects and JS (click image)

Happy coding!

Smart Mobile Studio and CSS: part 1

October 9, 2017 Leave a comment

If I were to pinpoint a single feature of the modern HTML5 rendering engine that demands both respect and care, it would have to be CSS. While it’s true that no other piece of technology has seen the level of development as “the browser” for the past 20 years – the piece that has seen the most is without a doubt CSS.

When we designed Smart Mobile Studio, styling became an issue almost from the start. I knew CSS well, and I was reluctant to create a theming engine for Smart because it's so easy to fall into the same pit that Macromedia once did; namely that you end up boxing the user into a corner with the best of intentions. So instead of writing a large and complex styling engine, we designed the simplest possible system we could imagine – and left the rest to our users.

For advanced users who know their way around CSS, HTML and JavaScript as well as they know object pascal, this has been a great bonus. But for users who come directly from Delphi or Lazarus with little or no background in web technology, CSS has been a black box they would rather not touch. Which is really sad, because well written CSS makes up as much as 40% of a good application. If not more (!)

CSS for smarties

Most Delphi developers in their 40s who never really got into web development (because they were too busy coding in Delphi) probably think of CSS as a coloring language. I keep hearing the same thing over and over: "CSS? You can set colors, background pictures and stuff". In part they are right – for the late 90s, that is. Yes, CSS allows you to define how things should be colored and stuff like that – but CSS has evolved side by side with modern JavaScript and HTML, and as such it's capable of a lot more than just setting colors.

The most important features you want to know about are:

  • You can define gradients as backgrounds, not just a static color or picture
  • You can use alpha blending (rgba) rather than fixed colors (#rrggbb)
  • You can define elaborate animations (see the CSS sketch after this list)
  • Animations can use most CSS properties: colors, size, opacity and / or position
  • CSS is recursive; you can define rules that apply to child elements of a control using a style. You can also target child elements by name.
  • CSS is no longer just 2D but also 3D (note: Sprite3d has been ported to Smart, see SmartCL.Sprite3d.pas), so you can place elements in 3d space
  • Rotation is now standard, be it purely 2D or 3D
  • You can define transitions directly on a property, like how long a move should take
  • CSS is cascading (hence the term “cascading style sheets”)
  • CSS allows elements to inherit properties from their parents, which is extremely handy if you want all child elements to use the font you set in the first, actual control you are making.
  • Filters! You can now apply great graphics filters to your content
  • CSS is powered by the GPU (graphical processing unit) and makes full use of the graphics chipset on the target device
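
To make a few of these bullet points concrete, here is a small CSS sketch (the class and animation names are my own invention) combining a gradient background, a property transition and a keyframe animation:

.TMyButton {
  /* gradient background instead of a static color */
  background: linear-gradient(to bottom, #4090F0, #1050A0);
  /* animate any opacity change over 0.3 seconds */
  transition: opacity 0.3s ease-in-out;
}

/* a simple keyframe animation */
@keyframes pulse {
  from { transform: scale(1.0); }
  to   { transform: scale(1.1); }
}

.TMyButton:hover {
  animation: pulse 0.5s ease-in-out infinite alternate;
}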

This is just the tip of the iceberg of what modern CSS has to offer, but before you dive in, let's look at some fundamental facts you need to know when working in Smart Mobile Studio.

Class to style mapping

Have you ever wondered how a custom control in Smart knows what CSS style to use? For instance, if you drop a TW3Panel on a form – where does the style come from? Is there some magical spell that automatically assigns a piece of CSS to the visual control? Sure, you know there is a CSS file generated for the application, and you can pick between a few themes – but how is the panel's CSS style attached to an instance of TW3Panel?

Like I mentioned above, we tried to leave CSS alone for fear of boxing the user into a system that was too limited or too loose; but we did one stroke of genius, and that was to automatically map the pascal class-name to the CSS class name. And this turned out to be a very efficient way of dealing with styling.

So to make this crystal clear: let's say you create a new control called TMyControl, right? When you create an instance of that control in your pascal code, it will automatically try to use a CSS style with the same name. So far that is the only rule we have enforced. But it is extremely important to know this and understand how powerful it is.

Recursive CSS

The next thing I want to explain is how you can define recursive styles. Again, let's say you have created a new custom-control called TMyControl. You go into your project options, click on "Linker" in the treeview on the left – and then check the "Use custom theme" checkbox. This makes a copy of whatever theme you picked for your application and stores that copy within your project file. When you click "OK" to exit the project options dialog and click "Save", your project will get a new item cleverly named "Custom CSS". This is where you add your own styles.


So ok, we have a control called TMyControl and now we want to style it. So we double-click on the “Custom CSS” node in our project, and we are greeted with a ton of weird looking CSS code.

So let’s go ahead and create a style with the same name as our pascal class, that way they will find each other:

.TMyControl {
  background-color: #FF0000;
}

Click “Save” again (or “CTRL + S” on your keyboard) and compile + run your program. If you had created an instance of TMyControl on your form, you should now see a red box. Not much to look at just yet, but we will deal with that later.

But a blank control is really not much fun. So for the sake of argument, let's say you want to display a header inside your control. You create a second class called TMyHeader and instantiate it in the constructor of TMyControl. We want to place it at the top of our TMyControl display, 32 pixels high. So we end up with something like this:

unit Unit1;

interface

uses
  System.Types, System.Colors, System.Types.Convert,
  SmartCL.System, SmartCL.Graphics, SmartCL.Components, SmartCL.Forms,
  SmartCL.Fonts, SmartCL.Borders;

type

  // our header
  TMyHeader = class(TW3CustomControl)
  end;

  // our new cool control
  TMyControl = class(TW3CustomControl)
  private
    FHeader: TMyHeader;
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure Resize; override;
  public
    property Header: TMyHeader read FHeader;
  end;

implementation

procedure TMyControl.InitializeObject;
begin
  inherited;
  FHeader := TMyHeader.Create(self);
end;

procedure TMyControl.FinalizeObject;
begin
  FHeader.free;
  inherited;
end;

procedure TMyControl.Resize;
begin
  inherited;
  FHeader.SetBounds(0, 0, ClientWidth, 32);
end;

end.

At this point we can of course do the same as we just did, namely add a CSS style called ".TMyHeader" and define our header there – which is also how you should do things. But there will be cases where you don't have this fine-grained control over things – perhaps you are using a library, or maybe you are generating HTML and just injecting it via the innerHTML property? Who knows – but the point is we can actually write CSS that targets ANY child element without knowing much about it. And we do that using something called a CSS selector.

So let's say I want to color all children of TMyControl, regardless of type, green (just for the hell of it). Well, then I can do it like this in our CSS:

.TMyControl {
  background-color: #FF0000;
}

/* Color all (*) children green! */
.TMyControl > * {
  background-color: #00FF00;
}

We can also be more specific and say: color the first P (paragraph) inside the first DIV child green! And I should mention that the default tag TW3CustomControl manages is a DIV. Well, to target the text paragraph inside the first child we would write:

.TMyControl {
  background-color: #FF0000;
}

/* Color the P inside the first DIV green! */
.TMyControl > :first-child > P {
  background-color: #00FF00;
}

Now you are probably wondering: where did that "P" come from? There is no paragraph in my code? Well, as mentioned we can add it via the innerHTML property if we like:

procedure TMyControl.InitializeObject;
begin
  inherited;
  FHeader := TMyHeader.Create(self);
  FHeader.innerHTML := '<p>This is the text!</p>';
end;

Note: WordPress has a tendency to kill HTML tags, so if you don't see a paragraph tag in the string above, WordPress gobbled it up.

Now, the point of this code so far has not been to teach how to write good code. In fact, you really should try to avoid code like this unless you know what you are doing. The point here was to show you how CSS can be made to operate on structures. If a style is selected by a control, selector code like I demonstrated above kicks in automatically, and you can do some pretty amazing things with it. Just changing the background doesn't really give this system the credit it deserves. You can add animations, change the row-color of every odd list item, add a glowing rectangle only around a particular element — the sky is the limit!
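
As a taste of that, here is a hedged sketch (the control name and element id are my own invention) of the odd-row trick using the :nth-child selector:

/* Give every odd child row of a hypothetical list control its own color */
.TMyListBox > :nth-child(odd) {
  background-color: #F0F0F0;
}

/* And add a glow to one particular element, targeted by name (id) */
.TMyListBox > #special {
  box-shadow: 0 0 8px #40A0FF;
}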

The cascading part

This is probably one of the simplest features ever, yet it’s one that people fail to remember when they sit down to write CSS code. So please make a note of this one because it will save you so much time.

So far we have looked at single styles attached to a control. But truth be told, you can assign 100 styles to the same control – at the same time (!). What happens is that the rendering engine merges them all together and draws whatever the outcome is onto the display. The only rule is: they must not collide. If you define two backgrounds, the style engine will try to merge them, but odds are only one of them will survive.

But let’s stop for a minute and think about what this means:

  • Instead of one large, monolithic style for a control, you can divide it into smaller and more manageable parts
  • You can define borders in one style, background in another and fonts in a third
  • You can have two separate animations running at the same time targeting the same element – and as long as they don't manipulate the same properties, it will work just fine.

It can take a while for the true potential of this to really sink in.
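
To sketch what that division of labor can look like (the style names are my own invention), here are three small classes that could all be assigned to the same control:

/* Borders in one style ... */
.MyBorders  { border: 1px solid #303030; border-radius: 4px; }

/* ... background in another ... */
.MyBackdrop { background: linear-gradient(to bottom, #FFFFFF, #E0E0E0); }

/* ... and fonts in a third */
.MyTypeface { font-family: Verdana, sans-serif; color: #202020; }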

To give you a practical example: This is how Smart Mobile Studio deals with disabled states. Whenever you disable a control, a style called “DisabledState” is added to the control. This takes over opacity, disables mouse and touch events, changes the mouse cursor and draws a diagonal pattern that covers the control.

When the control is enabled again, we simply remove the style and it reverts back to normal. It’s pretty cool if I say so myself!

TW3CustomControl, which is the foundation for all visible controls on the palette, has a property called "CSSClasses". This has been deprecated and replaced by "TagStyles", but both still work. This class gives you easy methods for adding, removing and checking whether any extra styles (apart from the default style) have been added.

It looks like this:

  TW3TagStyle = class(TW3OwnedObject)
  private
    FCache:     TStrArray;
    FCheck:     integer;
    FHandle:    TControlHandle;
  protected
    function    GetCount: integer; virtual;
    function    GetItem(const Index: integer): string; virtual;
    procedure   SetItem(const Index: integer; const Value: string); virtual;
    procedure   ParseToCache(CssStyleText: String); virtual;
    procedure   CacheToTag; virtual;
    procedure   TagToCache; virtual;
    function    AcceptOwner(const CandidateObject: TObject): Boolean; override;
  public
    property    Handle: TControlHandle read FHandle;
    property    Count: integer read GetCount;
    property    Items[const Index: integer]: string read GetItem write SetItem;

    procedure   Update; virtual;

    class procedure AddClassToControl(const Handle: TControlHandle; CssClassName: string);
    class function ControlContainsClass(const Handle: TControlHandle; CssClassName: string): boolean;
    class procedure RemoveClassFromControl(const Handle: TControlHandle; CssClassName: string);

    function    Contains(const CssClassName: string): boolean;
    function    Add(CssClassName: string): integer;
    function    Remove(const Index: integer): string;
    function    RemoveByName(CssClassName: string): string;
    function    IndexOf(CssClassName: string): integer;
    function    ToString: string;
    procedure   Clear;

    constructor Create(AOwner: TObject); override;
    destructor  Destroy; override;
  end;

So let's say you have a fancy animated background you want to show while doing something – then simply call the AddClassToControl() method.
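
As a hedged sketch of how that could look from pascal ("BusyAnimation" being a style you have defined yourself in the custom CSS):

procedure ShowBusy(const Control: TW3CustomControl; Busy: boolean);
begin
  if Busy then
    // attach the extra style; the rendering engine merges it in
    TW3TagStyle.AddClassToControl(Control.Handle, 'BusyAnimation')
  else
    // detach it again and the control reverts to its default look
    TW3TagStyle.RemoveClassFromControl(Control.Handle, 'BusyAnimation');
end;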

I should mention that I have used the word "style" so far to avoid confusion. A CSS definition is not really called a style in HTML land, but a style class. I just used "style" to make the distinction easier for everyone.

Summing up

In this short article we have had a look at the fundamental rules of CSS. We have looked at how a control finds its CSS style, and how to define your own styles. We also brushed against the concept of CSS selectors, which can recursively affect child elements in your controls — and last but not least, we have talked about cascading and how you can assign multiple styles to the same element.

In our next article we are going to look at some of the next-generation features in our RTL regarding styles, and also talk a bit about what we have cooking in our labs. Needless to say, CSS is going to become easier and much more powerful in the weeks to come, so it’s important that you pick up on the basics now!

Homework (if you need it) is to have a look at the CSS pascal classes in our RTL. They contain a lot of nice features, helper classes and more to generate platform independent CSS code that you can use right now.

You want to go through the following units:

  • SmartCL.CSS.StyleSheet
  • SmartCL.CSS.Classes
  • SmartCL.Effects

Have a peek at the method "TSuperStyle.AnimGlow" and see how CSS can be written as code, although in most cases it's easier to just write it as vanilla CSS. You will also be happy to know that stylesheets can be created as normal pascal objects, so you don't have to put all your eggs in one basket.

The last unit in that list, SmartCL.Effects, is special. It uses something called "partial classes", which is not supported by Delphi or Lazarus. In short, it means that you can spread the declaration of a class over many units.

When you add SmartCL.Effects to your form's uses clause, TW3CustomControl suddenly gains a ton of effect methods (prefixed by "fx"). These are CSS animation effects that you can call on any control. You can also daisy-chain them together and they will execute in sequence. Again, this demonstrates what you can achieve with CSS and some clever programming.
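
A hedged sketch of what such a chain could look like – the control name and parameter values are illustrative, and the exact signatures may differ between RTL versions:

  // Glide, then grow, then fade - executed in sequence, not all at once
  W3Panel1.fxMoveTo(200, 100, 0.4)
          .fxScaleUp(1.0, 1.2, 0.3)
          .fxFadeOut(0.5);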

Until next time!

Webfonts in Smart Mobile Studio

October 4, 2017 2 comments

Webfonts are something I have wanted to include in Smart for ages now. It's such a simple feature, but when you use it right it becomes powerful and reassuring.

What is a webfont?

Well, you know how you have to define which fonts you use under HTML, right? And if the user doesn't have that font, you have fallback fonts that can be used instead? If you have worked with web technology for a while, you no doubt know how haphazard the results can be. You would think that a font like "verdana" looks exactly the same from system to system – but that is not always the case.


Adding webfonts to your project is very easy

Apple, for instance, has its own tweak on just about every typeface; Linux often has alternatives that look good but might not be 100% identical (on some distros – Linux is not exactly "one thing"). And Microsoft tends to live in its own universe.

The solution? Webfonts. In short it means that the browser will check whether the user has the font you need installed. And if they don't – the font is downloaded from a font provider (like Google) when your web application starts.
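
A minimal sketch of what that looks like in practice (the URL is illustrative; copy the exact one from the font's page on Google Fonts):

/* Pull the "Ubuntu" typeface from Google Fonts ... */
@import url('https://fonts.googleapis.com/css?family=Ubuntu');

/* ... and use it, with sane fallbacks if the download fails */
.TMyControl {
  font-family: 'Ubuntu', 'Helvetica Neue', Helvetica, sans-serif;
}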

Fonts, glorious fonts!

The result is that your application will look and feel the same no matter what device is used. And that is a very important thing – because coding flexible, adaptive UIs that should work on Android, iOS, TVs and ordinary browsers is no picnic to begin with. Having to worry that your fancy Ubuntu-based UI is rendered using vanilla sans-serif (read: looking like something out of the 80s) has been an ever-present reality for two decades now.


Plenty of good looking fonts on Google

If you head over to https://fonts.google.com and take a gander at the fonts available, I’m sure you agree that this is a fabulous idea. And as always, when you combine good-looking fonts with some cool CSS – the results can be spectacular.

Still in Alpha

We are still in Alpha for Smart Mobile Studio 3.0, so there might be hiccups along the way. But all in all you should be able to enjoy webfonts in our next update.


Why buy a Vampire accelerator?

August 24, 2017 2 comments

With the Amiga about to re-enter the consumer market, a lot of us "old timers" are busy knocking the dust off our old machines. And I love my old machines, even though they are technically useless by modern standards. But these machines have a lot of inspiration in them, especially if you write code. And yes, there is a fair bit of nostalgia involved; there is no point in lying about any of this.

I mean, your mobile phone is probably 100 times faster than a vintage Amiga. But as you will discover with the new machines that are about to hit the market, there is more to this computer than you think. But a vintage Amiga? Sadly they lack the power to do anything useful [in the "modern" sense].

Enter the vampire

The Vampire is a product that started shipping about a year ago. It's an FPGA-based accelerator, and it's quite frankly turning the retro scene on its head! Technically it's a board that you just latch onto the CPU socket of your classic Amiga; it then takes over the whole machine and replaces the CPU and chipset with its own versions of these. Versions that are naturally a hell of a lot faster!

The result is that the good old Amiga is suddenly beefy enough to play Doom, Quake, MP3 files and MPG video (click here to read the datasheet). In short: this little board gives your old Amiga machine a jolt of new life.

Emulation vs. FPGA

I'm not going to get into the argument about FPGA not being "real", because that's not what FPGA is about. Nor am I negative towards classic hardware – I own a ton of old Amiga gear myself. But I will get in your face when it comes to buying a Vampire.

Before we continue I just want to mention that there are two models of the Vampire. There is the add-on board I have just mentioned, which is again divided into different models for various Amiga versions (A600 and A500 so far). The second model is a completely stand-alone Vampire motherboard that won't even need a classic Amiga to work. It will be, for all intents and purposes, a stand-alone SBC (single board computer): you just hook up power, video, storage and mouse – and off you go!

This latter version, the stand-alone, is a project I firmly believe in. The old boards have been out of production since 1993 and are getting harder to come by. And just like people, they will eventually break down and stop working. There is also price to consider, because getting your 20-year-old A500 fixed is not easy. First of all you need a specialist who knows how to fix these old things, and he will also need parts to work with. Since parts are no longer in production and homebrew can only go so far, well – a brand new motherboard that is compatible in every way sounds like a good idea.

There is also the fact that FPGA can reach absurd speeds. It has been mentioned that if the Vampire used a more expensive FPGA module, 68k-based Amigas could compete with modern processors (Source: https://www.generationamiga.com/2017/08/06/arria-10-based-vampire-could-reach-600mhz/). Can you imagine a 68k Amiga running side by side with the latest Intel processors? Sounds like a lot of fun if you ask me!


Amiga 1000, in my view the best looking Amiga ever produced

But then there is emulation. Proper emulation, which for Amiga users can only mean one thing: UAE in all its magnificent diversity and incarnations.

Nothing beats firing up a real Amiga, but you know what? That feeling has been greatly exaggerated. I recently bought a sexy A1000, which is the first model ever made. This is the original Amiga, made way back before Commodore started to mess around with it. It cost me a small fortune to get – but hey, it was my first ever Amiga so I wanted to own one again.

But does it feel better than my Raspberry PI 3b powered A500? Nope. In fact I have only fired up the A1000 twice since I bought it, because having to wait for disks to load is just tedious (not to mention that you can't get new, working floppy disks anymore). Seriously, I love the machine to bits, but it's just damn tedious to work on in 2017. It belongs to the 80s, and no-one can ever take away its glory or its role in computer history. That achievement stands forever.

High Quality Emulation

If you have followed my blog and Amiga escapades, you know that my PI 3b based Amiga, overclocked to the hilt, yields roughly 3.2 times the speed of an Amiga 4000/040. This was at one point the flagship Commodore computer. The Amiga 4000s were used in movie production, music production, 3d rendering and heavy-duty computing all over the world. And the 35€ Raspberry PI gives you 3.2 times that power via the UAE4Arm emulator. I don't care what the Vampire does, the PI will give it the beating of its life.

Compiling anything, even older stuff that is a joke by today's standards, is painful on the Raspberry PI. Here showing my retrofitted A500 PI with a sexy LED keyboard. It will soon get a makeover with an UP board :)

My retrofitted Raspberry PI 3b Amiga. Serious emulation at high speed allowing for software development and even the latest Freepascal 3.x compiler

Then suddenly, out of the blue, Asus came along with the Tinkerboard. A board that I hated when it first came out (read part 1 here, part 2 here) due to its shabby drivers. The boards had been collecting dust on my office shelf for six months or so – and it was blind luck that I downloaded and tested a new disk image. If you missed that part, you can read the full article here.

And I'm glad I did, because man – the Tinkerboard makes the Raspberry PI 3b look like a toy! Asus has also adjusted the price lately. It was initially priced at 75€, but in Norway right now it retails for about 620 NKR – or 62€. So yes, it's about twice the price of the PI – but it also gives you twice the memory, twice the graphics performance, twice the IO performance and a CPU that is a pleasure to work with.

The Raspberry PI 3b can't be overclocked to the extent the model 1 and 2 could. You can over-volt it and tweak the GPU and memory to make it run faster. But people don't call that "overclocking" in the true sense of the word, because that would mean the CPU is set to run at speeds beyond the manufacturer's specifications. So with the PI 3b there is relatively little you can do to make it run faster. You can speed it up a little bit, but that's it. The Tinkerboard, on the other hand, can be overclocked to the hilt.


The A1222 motherboard is just around the corner [conceptual art]

Out of the box it runs at 1.5 GHz, but if you add a heatsink, a fan (important) and a 3A PSU – you can overclock it to 2.6 GHz. And like the PI you can also tweak memory and GPU. So the Tinkerboard will happily run 3 times faster than the PI. If you add a USB3 harddisk you will also beef up IO speeds by 100 megabytes a second – which makes a huge difference. Linux does memory paging, and that slows down everything if you only use the SD card.

In short: if you fork out 70€ you get an SBC that runs rings around both the Vampire and the Raspberry PI 3b. If we allow for some Linux services and drivers that have to run in the background, 3.2 × 3 = 9.6. Let's round that down to 9, since the background services will cost a little performance. But still — 70€ for an Amiga that runs 9 times faster than an A4000 with its MC68040 CPU? That should blow your mind!

I'm sorry, but there has to be something wrong with you if that doesn't get your juices flowing. I rarely game on my classic Amiga setup – I'm a coder – but with this kind of firepower you can run some of the biggest and best Amiga titles ever made, and the Tinkerboard won't even break a sweat!

You can’t afford to be a fundamentalist

There are some real nutbags in the Amiga community. I think we all agree that having the real deal is a great experience, but the prices we see these days are borderline insane. I had to fork out around 500€ to get my A1000 shipped from Belgium to Norway. Had tax been added to the original price, I would have been looking at something in the 700€ range. Still – 500€ for a 20-year-old computer that can hardly run Workbench 1.2? Unless you add the word "collector" here, you are in fact barking mad!

If you are looking to get an Amiga for old times' sake, or perhaps you have an A500 and wonder if you should fork out for the Vampire – will it be worth the 300€ price tag? Unless you use your Amiga on a daily basis, I can't imagine what you need a Vampire for. The stand-alone motherboard I can understand, that is a great idea – but the accelerator? 300€?

I mean, you can pay 70€ and get the fastest Amiga that has ever existed. Not a bit faster, not something in second place – no – THE FASTEST Amiga that has ever existed. If you think playing MP3 and MPG media files is cool with the Vampire, then you are in for a treat here, because the same software will work. You can safely download the latest patches and updates to various media players on the classic Amiga, and they will run just fine on UAE4Arm. But this time they will run a hell of a lot faster than on the Vampire.


My old broken A500 turned into an ass-kicking, battle hardened ARM monster

You really can't be a fundamentalist in 2017 when it comes to vintage computers. And why would you want to be? With so much cool stuff happening in the scene, why would you want to limit your Amiga experience to a single model? Aros is doing awesome stuff these days, you have the x5000 out and the A1222 just around the corner. MorphOS is stable and good on the G5 PPC — there has never been a time when there were so many options for Amiga enthusiasts! Not even during the golden days between 1989 and 1994 were there so many exciting developments.

I love the classic Amiga machines. I think the Vampire stand-alone model is fantastic, and if they ramp up the FPGA to a faster model, they have in fact re-created a viable computer platform. A 68080 FPGA-based CPU that can go head to head with x86? That is quite an achievement – and I support it wholeheartedly.

But having to fork out this amount of cash just to enjoy a modern Amiga experience is a bit silly. You can actually, right now, go out and buy a $35 Raspberry PI and enjoy far better results than the Vampire is able to deliver. How that can be negative I have no idea, nor will I ever understand that kind of thinking. How do any of these people expect the Amiga community to grow and get new, young members if the average price of a 20-year-old machine is 500€? Which incidentally is 50€ more than a brand new A1222 PPC machine capable of running OS 4.

And with the Tinkerboard you can get 9 times the speed of an A4000? How can that not give you goosebumps!

People talk about Java and virtual machines like it's black magic. Well, UAE gives you a virtual CPU and chipset that makes mincemeat of both Java and C#. It also comes with one of the largest software libraries in the world. I find it inconceivable that no-one sees the potential in that technology beyond game playing – but when you become violent or nasty over hardware, then I guess that explains quite a bit.

I say, use whatever you can to enjoy your Amiga. And if your perfect Amiga is a PI or a Tinkerboard (or ODroid) – who cares!

I for one will not put more money into legacy hardware. I'm happy that I have the A1000, but that's where it stops for me. I am looking forward to the latest Amiga x5000 PPC and can't wait to get coding on that – but unless the Apollo crew upgrades to a faster FPGA, I see little reason to buy anything. I would gladly pay 500–1000€ for something that can kick modern computers in the behind. And I imagine a lot of 68k users would be willing to do that as well. But right now PPC is a much better option, since it gives you both 68k and the new OS 4 platform at one price. And for affordable Amiga computing, emulation is now of such quality that you won't really notice the difference.

And I love coding 68k assembler on my Amibian emulator setup. There is nothing quite like it 🙂

The Tinkerboard Strikes Back

August 20, 2017 Leave a comment

For those that follow my blog, you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it – in fact I bought two of the boards, because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never emerged for the user. It was released prematurely, and I think most people that bought it agree on this.


The initial release was less than bad, it was horrible

Since my initial review those months ago, good things have happened. Asus seems to have listened to the "poonami" of negative feedback and adapted their website accordingly. Unlike the first time I visited, when you literally had to dig through recursive menus (which were less than intuitive in this case) just to download the software – the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get; downloading it is reduced to a one-liner for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever-popular Python and Scratch.

I am a bit disappointed that they don't provide Free Pascal units. A lot of developers use object pascal on these boards, after all, because object pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that, after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good pascal programmer – but it would be cool if Asus remembered that there are more languages in the world than C and Python.
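
To illustrate how small such a port usually is, here is a hedged Free Pascal sketch – the function, symbol and library names below are made up, since Asus' actual C API dictates the real ones:

unit TinkerGPIO;

interface

uses
  ctypes;

// Hypothetical import of a C routine from Asus' GPIO library.
// The real symbol and library name depend on the actual API.
function tinker_gpio_write(pin, value: cint): cint; cdecl;
  external 'tinkergpio';

implementation

end.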

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the text-book example of how not to release a product (the whole release has been odd; Asus is a huge, multi-national corporation, yet their release had "basement three-man band" written all over it).

So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and stuff like that; basically anything with a touch-screen that does something.


The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles for ordinary browsers (JavaScript environments with a traditional HTML5 display) but also for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix is written 100% in node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end that couples with an HTML5 front-end. This is a ready-to-use environment that you can deploy your own applications through. Things like storage, a nice looking UI, user logon and credentials and much, much more are all implemented for you. You don't have to use it of course; you can write your own system from scratch if you like. We created "Amibian" to demonstrate just how powerful Smart Pascal can be in the right hands.

With this in mind – my main concern when testing SBCs (single board computers) is obviously web performance. By default JavaScript is a single-core, event-driven runtime system; you can spawn threads of course, but it's done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system friendly, a gentle giant if you like, which has turned out to be a good thing – because the way JS schedules execution makes it ideal for clustering!

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost a whole ecosystem in themselves. With JIT compilers and LLVM-style optimization — it's a whole new ballgame.

Making a scale

To give you a better context for where the Tinkerboard lands on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and seeing how each of them performs on different boards. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and last but not least: the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning that they were compiled within the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b


Bassoon ran well, it's not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (which contains hand-optimized assembler, so not exactly a fair fight) – but when it comes to modern HTML5, the PI doesn't stand a chance. You could perhaps use a Raspberry PI 3b for simple applications which are not graphics and CPU intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don't want a Raspberry when doing high quality IOT, unless it's headless code under node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box – in fact you can target 40 embedded systems without any change to your code. But for large, high quality web front-ends, the PI just won't cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick ass ARM experience anyways. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4


The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here. And it's only 10€ more than the Raspberry PI, which is a toy at best. But the ODroid XU4 delivers a good Linux desktop, and it's well worth the extra 10€ when compared to the PI.

Personally I don't understand why people keep buying PIs when there are so many better options on the market now. At least not if web technology is involved. A small server or emulator, sure, but not HTML5 and browsers. The PI just can't handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that ODroid XU4 can run Quake.js! The PI can’t even start that because it’s so demanding. It is one of the largest and most resource hungry asm.js projects out there – but ODroid XU4 did a fantastic job.

Now, it's not a silky smooth experience; I would guess something along the lines of 17-20 fps. But you know what? That's pretty good for a $45 board.

I have owned far worse x86 PC’s in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that something had been done by Asus to rectify the situation though, because Asus really should have done a better job before releasing it. It's now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have waited with the release. I was not alone in butchering the whole board; it's been a source of frustration for those that bought it. 75€ is not much, but no-one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order, I have a feeling the Jessie image will be even faster – Ubuntu has always been a high-quality distribution, but it's also one of the most demanding. One might even say it's become bloated. But it does deliver a near Microsoft Windows-like experience, which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus has cleaned up their act and implemented the drivers properly, and you can feel that the moment the desktop comes into view. With the PI you are always fighting with lagging performance. When you start a program the whole system freezes for a while, when you quit a program the system freezes – hell, when you move the mouse around the system bloody freezes! Well, that is not the case with the Tinkerboard, that's for sure. The Tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC, to be honest.

  • Skid-Row: 10-15 frames per second
  • Quake: Full screen 32bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any embedded coding, regardless of whether you are using Smart Mobile Studio or some other devkit — this is the board to get (!)

Like already mentioned, it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, and you can explore software far more complex than both the PI and ODroid combined. With the Tinkerboard you can finally deliver a state-of-the-art product built with off-the-shelf web components. It's in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinker I felt cheated. It was so frustrating, because the specs were so good and the terrible performance just came down to sloppy work and Asus releasing it prematurely for cash (let's face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. Like I mentioned, Ubuntu is not the racecar of distributions out there. Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompile every piece with maximum LLVM optimization and as few running services as possible — the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the tinkerboard! It is by far the coolest board in my collection now. In fact it’s so good that I’m donating one to my daughter. She is presently using an iMac which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and Youtube, buy a nice touch-screen display and wall mount it in her room.

Well done Asus!

Where is PowerPC today?

August 5, 2017 5 comments

Phase 5 PowerUP board prototype

Anyone who messed around with computers back in the 90s will remember PowerPC. This was the only real alternative to Intel's complete dominance with the x86 CPUs, and believe me when I say the battle was fierce! Behind the PowerPC you had companies like IBM and Motorola, companies that both had (or have) an axe to grind with Intel. At the time the market was split in half – with Intel controlling the business PC segment – while Motorola and IBM represented the home computer market.

The moment we entered the 1990s it became clear that Intel and Microsoft were not going to stay on their side of the fence, so to speak. For Motorola in particular this was a death match in the true sense of the word, because the loss of both Apple and Commodore represented billions in revenue.

What could you buy in 1993?

The early 90s were bitter-sweet for both Commodore and Apple. Faster and affordable PCs were already a reality, and as a consequence both Amiga machines and Macs were struggling to keep up.

The Amiga 1200 still represented a good buy. It had a massive library of software, both for entertainment and serious work. But it was never really suited for demanding office applications. It did wonders in video and multimedia development, and of course games and entertainment – but the jump in price between the A1200 and the A4000 became harder and harder to justify. You could get a well-equipped Mac with professional tools in that range.

Apple, on the other hand, was never really an entertainment company. Their primary market was professional graphics, desktop publishing and music production (Photoshop, Pro Tools, Logic etc. were exclusive Mac products). When it came to expansions and ports they were more interested in connecting customers to industrial printers, midi devices and high-volume storage. The Mac was always a machine for the upper class, people with money to burn; the Amiga dominated the middle class. It was a family type of computer.

But Apple was not a company in hiding, neither from Commodore nor from the Wintel threat. So in 1993 they introduced the Macintosh Quadra series to the consumer market. Unlike their other models this was aimed at home users and students, meaning that it was affordable, powerful and could be used for both homework and professional applications. It was a direct threat to the upper middle class that could afford the big-box Amiga machines.


The 68k Macintosh Quadra came out in October of 1993

But no matter how brilliant these machines were, there was no hiding the fact that when it came to raw power – the PC was not taking any prisoners. It was graphically superior in every way, and Intel kept doubling CPU speeds year after year; just like Moore's law had predicted.

With the 486-DX2 looming on the horizon, it was game over for the old and faithful processors. The Motorola 68k family had been there since the late 70s; it was practically an institution, but it was facing enemies on all fronts and simply could not stand in the way of evolution.

The PowerPC architecture

If you are in your 20s you won't remember this, but back in the late 80s and early 90s the battle between computer vendors was indeed fierce. You have to take into consideration that Microsoft and Intel did a real number on IBM. Microsoft stabbed IBM in the back and launched Windows as a direct competitor to IBM's OS/2. When I write "stabbed in the back" I mean that literally, because Microsoft was initially hired to create parts of OS/2. It was the typical lawsuit mess, not unlike Microsoft and Sun later, where people would pick sides and argue over who the culprit really was.

As you can imagine, IBM was both bitter and angry at Microsoft for stealing the home PC market in such a shameful way. They were supposed to help IBM and be their ally, but turned out to be their fiercest competitor. IBM had also created a situation where the PC was licensed to everyone (hence the term "IBM clone") – meaning that any company could create parts for it, and there was little IBM could do to control the market like they were used to. They would naturally get revenue from these companies in the form of royalties (and would later retire 99% of all their products – why work when you get billions for doing nothing?), but at the time they were still in the game.

Motorola was in a bad situation themselves, with the 68k line of processors clearly incapable of facing the much faster x86 CPU’s. Something new had to be created to ensure their market share.

The result of this “marriage of necessity” was the PowerPC line of processors.


The Apple “Candy” Macs made PPC and computing sexy

Apple jumped on the idea. It was the only real alternative to x86. And you have to remember that – had Apple gone to x86 at that point, they would basically have fed the forces that wanted them dead. You could hardly make out where Microsoft started and Intel ended during the early 90s.

I'm going to spare you the whole fall and rebirth of Apple. Needless to say, Apple came to the point where their branch of PowerPC processors caused more problems than they had benefits. The type of PowerPC processors Apple used generated an absurd amount of heat, and it was turning into a real problem. We see this in their later models, like the dual-CPU G5 PowerMac, where 40% of the cabinet is dedicated purely to cooling.

And yes, Commodore kicked the bucket back in 1994, so they never finished their new models. Which is a damn shame, because unlike Apple they went with a dedicated RISC processor. These models would not have suffered the heating problems the PPCs used by Apple had to deal with.

Note: PPC and RISC are two sides of the same coin. PPC processors are RISC-based, but naturally there exist hundreds of different implementations. To avoid a ton of arguments around this topic, I treat PPC as something different from the PA-RISC that Commodore was playing with in their Hombre "skunkworks" project.

You can read all about Apple's strain of PowerPC processors here, and PA-RISC here.

PPC in modern computers?

I am going to be perfectly honest. When I heard that the new Amiga machines were based on PowerPC my reaction was less than polite. I mean who the hell would use PowerPC in our day and age? Surely Apple’s spectacular failure would stand as a warning for all time? I was flabbergasted to say the least.

The Amiga One came out and I didn't even give it the time of day. The Sam440 motherboards came out; I couldn't care less. It would have been nice to own one, but the price at the time and the lack of software were just too disproportionate to make sense.

And now there is the Amiga x5000, and a smaller, more affordable A1222 (a.k.a. "Tabor") model just around the corner. And they are both equipped with a PPC CPU. There are just two logical conclusions you can draw when faced with this: either the maker of these products is nuttier than a Snickers bar, or there is something the general public doesn't know.

What the general public doesn't know has turned out to be quite a lot. While you would think PPC was dead and buried, the reality of PPC is not that simple. It turns out there is not just one PPC family (or branch) but several. The one that Apple used back in the day (and that MorphOS for some odd reason supports) represents just one branch of the PPC tree, if you like. I had no idea this was the case.

The first thing you are going to notice is that the CPU in the new Amigas doesn't have the absurd cooling problems the old Macs suffered. There are no 20cm cooling ribs, you don't need 2 fans on Ritalin to prevent a CPU meltdown, and you also don't need a custom aluminium case to keep it cool (everyone thinks the "Mac Pro" cases were just to make them look cool; it turned out to be more literal – it was to turn the inside into a fridge).

In other words, the branch of PPC that we have known so far, the one marketed as “PowerPC” by Apple, Phase5 and everyone back in the 90’s is indeed dead and buried. But that was just one branch, one implementation of what is known as PPC.

Remember when ARM died?

When I started to dig into the whole PPC topic, I could not help thinking about the Arm processor. It’s almost spooky to reflect on how much we, the consumers, blindly accept as fact. Just think about it: you were told that PowerPC was the bomb, so you ended up buying that. Then you were told that PowerPC was crap and that x86 was the bomb, so you mentally buried PowerPC and bought x86 instead. The consumer market is the proverbial sheep farm, where most of us just blindly accept whatever advertising tells us.

This was also the case with Arm. Remember a company called Acorn? It was a great British company that invented, among other things, the Arm core. I remember reading articles about Acorn when I was a kid. I even sold my Amiga for a while and messed around with an Acorn Archimedes. A momentary lapse of sanity, I know; I quickly got rid of it and bought back my Amiga. But I did learn a lot from messing around in RISC OS.


The Acorn Archimedes, a brilliant RISC-based machine that sadly didn’t make it

My point is, everyone was told that Arm was dead back in the 80s. The Acorn computers used a pure RISC processor at the time (again, PPC is a RISC-based CPU, but I treat them as separate since the designs are miles apart), and it was no secret that Acorn hoped to equip their future machines with this new and magical Arm thing. Reading about the power and speed of Arm was very exciting indeed. Sadly, such a computer never saw the light of day back in the 80s. Acorn went bust, and the market rolled over them much like it later would Commodore.

The point I’m trying to make is that everyone was told Arm died with Acorn. And once that idea was planted in the general public, it became a self-fulfilling prophecy. Arm was dead, end of story. It didn’t matter that Acorn had set up a separate company that was unaffected by the bankruptcy. Once the public deems something dead, it just vanishes from the face of the earth.

Fast forward to our time, and Arm is no longer dead; quite the opposite! It’s presently eating its way into just about every piece of electronics you can think of. In many ways, Arm is what made the IoT revolution possible. The whole Raspberry Pi phenomenon would quite frankly never have happened without Arm. The low price, coupled with fantastic performance and the fact that these CPUs rarely need cooling (unless you overclock the hell out of them), has made Arm the most successful CPU ever made.

The PPC market share

With Arm’s so-called death and rebirth in mind, let’s turn our eyes to PPC and look at where it is today. PPC has suffered pretty much the same fate as Arm once did. Once a branch of the tech is declared “dead” by media and spin-doctors (never mind that PPC is actually a cluster of designs, not a single design or “chip”), the general public blindly follows and mentally buries the whole subject.

And yes, I admit it: I am guilty of this myself. In my mind there was no distinction between PPC and PowerPC. Which is a bit like not knowing the difference between rock & roll as a genre and KISS the rock band. To continue that parallel, what we have done is basically to ban all rock bands, regardless of where they are from, because one band once gave a lousy concert.

And that is where we are at. PPC has become invisible in the consumer market, even though it’s there. Which is understandable considering the commercial mechanisms at work, but is PPC really dead? This should be a simple question. And commercial mechanisms notwithstanding, the answer is a solid no. PPC is not dead at all. We have just parked it in a mental limbo; out of sight, out of mind and all that.


PlayStation 3, Nintendo Wii U and PlayStation VR all use PPC-based chips

PPC today has a strong foothold in industrial computing. The oil sector is one market that uses PPC SBCs (single-board computers) extensively. You will find them in valve controllers, pump and drill systems, and pretty much any system that requires a high degree of reliability.

You may also be surprised to learn that cheap PPC SBCs enjoy the same low power requirements that draw people to Arm (3.3–5.0 V). And naturally, the more powerful the chip, the more juice it needs.

The reason PPC is popular and still being used with great success is first of all reliability. That reliability is not just about the physical hardware but also the software. PPC gives you two RTOSes (real-time operating systems) to choose from. Each of them comes with a software development toolchain that rivals whatever Visual Studio has to offer. So you get a good-looking IDE, a modern and up-to-date compiler, the ability to debug “live” on the boards, and real-time signal-processing protocols. The list of software modules you can pick from is massive.


The QNX RTOS desktop. This is a module you can embed in your own products

The last part of that paragraph, namely real-time signal processing, is extremely important. Can you imagine an oil valve under 40,000 cubic tons of pressure failing, while the regulator that is supposed to compensate doesn’t get the signal because Linux or Windows was busy with something else? It gets pretty nutty at that level.
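
To make that concrete, here is a minimal sketch in plain C of what a control loop asking for real-time treatment looks like. I’m using the standard POSIX scheduling API here rather than any particular RTOS, and the regulator function is a hypothetical stand-in. The point is the contrast: an RTOS kernel guarantees that a request like this is honored within a bounded time, while a desktop OS treats it as best effort.

    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical control loop for a pressure regulator.
     * On a real RTOS the kernel guarantees this code gets
     * scheduled within a bounded time; on a desktop OS the
     * same request is little more than a polite suggestion. */
    static void regulator_loop(void)
    {
        for (;;) {
            /* read sensor, adjust valve ... (omitted) */
        }
    }

    int main(void)
    {
        struct sched_param sp;
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 80;  /* high real-time priority */

        /* SCHED_FIFO: run until we block or yield, no time-slicing.
         * Requires elevated privileges on most systems. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        regulator_loop();
        return 0;
    }

Even with that flag set, a general-purpose kernel can still delay the loop under heavy load, which is exactly the failure mode the valve example describes, and exactly what an RTOS is built to rule out.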

The second market can be found in set-top boxes, game consoles and TV-signal decoders. While this market is no doubt under attack from cheap Arm devices, PPC still has a solid grip here due to its reliability. PPC as an embedded platform has roughly two decades’ head start over Arm when it comes to software development. That is a lifetime in computing terms.

When developers look at technology for a product they are prototyping, the hardware is just one part of the equation. Being able to easily write software for the platform, perform live debugging of code on the boards, and maintain products over decades rather than consumer-grade 1–3 year warranty periods: it’s a completely different ballgame. Technology like the external parts of a satellite dish runs for decades without maintenance. And there are good reasons why you don’t see x86 or Arm here.


PlayStation 3 and the new PS VR box both have a PPC-based CPU

As mentioned earlier, the PPC branch used today is not the same branch that people remember. I cannot stress this enough, because mixing them up is like mistaking Intel for AMD. They may share many of the same features, but ultimately they are completely different architectures.

The “PowerPC” label we know from back in the day was used to promote the branch that Apple used. Amiga accelerators, like the PowerUP boards, also used that line of processors. And anyone who ever stuffed a PowerUP board into their A1200 probably remembers the cooling issues. I bought one of the more affordable PowerUP boards for my A1200, and to this day I remember the whole episode as a fiasco. It was haunted by instability, sudden crashes and IO problems, all of it connected to overheating.

But the PPC processors delivered today by Freescale Semiconductor (bought by NXP back in 2015) are different. They don’t suffer the heat problems of their remote and extinct cousins, they have low power requirements, and they are incredibly reliable. Not to mention leagues more powerful than anything Apple, Phase5 or Commodore ever got their hands on.

Is Freescale for the Amiga a total blunder?

Had you asked me a few days back, chances are I would have said yes. I have known for a while that Freescale is used in the oil sector, but I had not taken into consideration the strength of the development tools and the important role an RTOS holds in a critical production environment.

I must also admit that I had no idea my PlayStation and Nintendo consoles were PPC-based. The PlayStation 4 doesn’t use PPC on its motherboard, but if you buy the fantastic and sexy VR add-on package, you get a second module that is, once again, a PPC-based product.

It also turns out that IBM’s high-end mainframes, the ones Amazon and Microsoft build on for cloud computing, are likewise PPC-based. So once again we see that PPC is indeed there, playing an important role in our lives, even though most people don’t see it. All of this is a matter of perspective.


The Nintendo Wii U uses a PPC CPU; not exactly a below-par gaming system

But the Amiga x5000 or A1222 will (hopefully) not be controlling a high-pressure valve or serving half a million users, so does any of this affect the consumer at all? Does any of it hold value for you or me? What on earth would real-time feedback mean for a hobby user who just wants to play some games, watch a movie or code demos?

The answer is: it could have a profound benefit, but it needs to be developed and evolved first.

Musicians could benefit greatly from the superior signal-processing features, but as of writing I have yet to find any mention of this in the Amiga NG SDK. So while the potential is there, I doubt we will see it before the Amiga has sold in enough volume.

Fast and reliable signal dispatching in the architecture would also have a profound effect on IPC (inter-process communication), allowing separate processes to talk to each other faster and more reliably than on, say, Windows or Linux. Programmers typically use a mutex or a critical section to protect memory while it’s being delivered to another process (note: I’m painting in broad strokes here), and this is a very costly mechanism under Windows and Linux. For instance, the reason UAE is still single-threaded is that isolating the custom chips in separate threads and having them talk to each other turned out to be too slow. If PPC can deal with that faster, processes can communicate faster and more interesting software can be made. Even practical things like a web server would benefit greatly from real-time message dispatching.
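
To illustrate the kind of communication this would speed up, here is a minimal sketch using POSIX message queues in C. Note that this is the ordinary desktop API, not the Amiga NG SDK (whose IPC primitives I haven’t seen); the queue name and message are made up for the example.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    /* Minimal sketch: post a message to a named queue and drain
     * it again. In real life the producer and consumer would be
     * two separate processes. Link with -lrt on Linux. */
    int main(void)
    {
        struct mq_attr attr = { 0 };
        attr.mq_maxmsg  = 8;   /* queue depth */
        attr.mq_msgsize = 64;  /* max bytes per message */

        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char msg[] = "frame 42 ready";
        mq_send(q, msg, sizeof(msg), 0);        /* producer side */

        char buf[64];
        mq_receive(q, buf, sizeof(buf), NULL);  /* consumer side */
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }

Every hop like this costs a system call and a scheduler round-trip on a conventional OS, and that overhead is precisely what fast signal dispatching in the architecture would shrink.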


There is no lack of vendors for PPC SBCs online; this one is from Abaco Systems

So for us consumers, it all boils down to volume. The Freescale branch of PPC processors is not dead and will be around for years to come; they are sold by the millions every year to a great variety of businesses. And while most of them operate outside traditional consumer awareness, that volume has a positive effect on pricing: the more a processor sells, the cheaper it becomes.

Most people feel that the Amiga x5000 is too expensive for a home computer, and they blame that on the CPU, forgetting that 50% of the subtotal goes into making the motherboard and all the parts around the CPU. The CPU alone does not represent the price of a whole new platform. And that’s just the hardware! On top of this you have the job of rewriting a whole operating system from scratch, adding features that have evolved between 1994 and 2017, and making it all sing together through custom-written drivers.

So it’s not your average programming project, to say the least.

But is it really too expensive? Perhaps. I bought an iMac two years back that was supposed to be my work machine. I work as a developer and use VMware for almost all my coding. It turned out the i5-based beauty just didn’t have the RAM, and fitting it with more (it came with 16 gigabytes; I need at least 32) would cost a lot more than a low-end PC. The sad part is that had I gone for a PC, I could have treated myself to an i7 with 32 gigabytes of RAM for the same price.

I later bit the bullet and bought a 3500€ Intel i7 monster with 64 gigabytes of RAM and the latest Nvidia graphics card. Let’s just say that the Amiga x5000 is reasonably priced in that context. I basically have an iMac I have no use for; it just sits there collecting dust, reduced to a music player.

Secondly, we have to look at potential. The Mac and Windows machines have had their potential completely exposed by now. We know what these machines do, and it’s not going to change any time soon.

The Amiga has a lot of hidden potential that has yet to be realized. The signal processing is just one example. The most interesting is by far the Xena chip (XMOS), which allows developers to implement custom hardware in software. It might sound like an FPGA, but XMOS is a different technology. Here you write code using a custom C compiler that generates a special breed of opcodes. Your code is loaded onto a part of the chip (the chip is divided into a number of squares, each representing a piece of logic, or a “custom chip” if you like) and will then act as a custom chip.
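
I haven’t written code for the Xena chip myself, so treat the following as a conceptual sketch in plain C rather than real XMOS code; the pin helpers are hypothetical stand-ins, stubbed so the sketch compiles on an ordinary PC. The idea it illustrates is simply this: a tight, deterministic loop gets compiled, loaded onto one of those squares, and from then on behaves like dedicated hardware.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical pin-access helpers; stand-ins for whatever
     * the real XMOS toolchain provides. Stubbed here so the
     * sketch compiles and runs on a normal PC. */
    static uint32_t pin_state = 0;
    static uint32_t read_input_pin(void)         { return pin_state ^= 1; }
    static void     write_output_pin(uint32_t v) { printf("out: %u\n", v); }

    /* A software-defined peripheral: endlessly sample an input,
     * transform it, and drive an output. Loaded onto one of the
     * chip's squares, a loop like this runs with deterministic
     * timing and acts like a custom chip instead of a task
     * competing with the main CPU. */
    static void soft_peripheral(int cycles)
    {
        for (int i = 0; i < cycles; i++) {  /* for (;;) on real hardware */
            uint32_t sample = read_input_pin();
            /* ... filter, decode or otherwise transform ... */
            write_output_pin(sample);
        }
    }

    int main(void)
    {
        soft_peripheral(8);  /* demo run; real firmware never exits */
        return 0;
    }

The practical difference from an FPGA is that this is still ordinary sequential code; it just gets its own guaranteed slice of silicon and time instead of being synthesized into logic gates.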


The Amiga x5000 in all her glory; notice the moderate cooling for the CPU

The Xena technology could really do wonders for the Amiga. Instead of relying on traditional library files executed by the main CPU, things like video decoding, graphical effects, auxiliary 3D functionality and even emulation (!) could be handled by Xena and executed in parallel with the main CPU.

If anything is going to make or break the Amiga, it won’t be the Freescale PPC processor; it will be the Xena chip and how they use it to benefit the consumer.

Just imagine running UAE almost solely on the Xena chip, emulating 68k applications at near-native speed without using the main CPU at all. Sounds pretty good! And this is a feature you won’t find on a PC motherboard. As always, they will add it should it become popular, but right now it’s not even on the radar.

So I, for one, believe that the next-generation Amiga machines have a shot. The A1222 is probably going to be the defining factor. It will retail at an affordable price (around 450€) and will no doubt go head-to-head with both consoles and mid-range PCs.

So as always, it’s about volume, timing and infrastructure. Everything but the actual processor, to be honest.

Last words

It’s been a valuable experience to look around and read up on PPC. When I started my little investigation, I had a dark picture in my head where the new Amiga machines were just a waste of time. I am happy to say that this is not true, and that the Freescale processors are indeed alive and kicking.

It was also interesting to see how widespread PPC technology really is. It’s not just a specialist platform, although that is absolutely where its strength lies financially; it ships in everything from your home router to your TV-signal decoder or game system. So it does have a foot in the consumer market, but as I have outlined here, most consumers have parked it in a blind spot, and we associate the word “PowerPC” with Apple’s fiasco of the past. Which is a bit sad, because it’s neither true nor fair.


Amiga OS 4.x is turning out to be a very capable system

I have no problem seeing a future where the Amiga becomes a viable commercial product again. I think there is some way to go before that happens, and the spearhead is going to be the A1222 or a similar product.

But as I have underlined again and again, it all boils down to developers. A platform is only as good as the software you can run on it, and Hyperion should really throw themselves into porting games and creative software. They need to build up critical mass and ship the A1222 with a ton of titles.

For my personal needs, I will be more than happy just owning the x5000. It doesn’t need to be a massive commercial success, because the Amiga is in my blood and I will always enjoy using it. And yes, it is a bit expensive, and I’m not in the habit of buying machines like this left and right. But I can safely say that this is a machine I will be enjoying for many, many years into the future, regardless of what others may feel about it.

I would suggest that Hyperion lower their prices to somewhere around 1000€ if possible. Right now they need to think volume rather than profit, and hopefully Hyperion will also start making the OS compatible with Arm. Again, my thoughts go to volume, and to the fact that IoT and embedded systems need an alternative to Linux and Windows 10 Embedded.

But right now I’m itching to start developing for it – and I’m not alone 🙂