Archive
Quartex Pascal, Build 10b is out for backers
I am deeply moved by some of the messages I have received about Quartex Pascal. Typically from people who either bought Smart Mobile Studio or have followed my blog over the years. In short, developers that like to explore new ideas; people who also recognize some of the challenges large and complex run-time libraries like the VCL, FMX and LCL face in 2020.
Since I started with all this “compile to JavaScript” stuff, the world has changed. And while I’m not always right – I was right about this. JavaScript has evolved, and will keep evolving, into a powerhouse. Largely thanks to Microsoft killing Basic. But writing large-scale applications in JavaScript is extremely time consuming – which is where Quartex Pascal comes in.

Quartex Pascal evolves every weekend. There are at least 2 builds each weekend for backers. Why not become a backer and see the product come to life? You get instant access to new builds and the docs, and you can learn why QTX code runs so much faster than the alternatives.
A very important distinction
Let me first start by reiterating what I mentioned in my previous post, namely that I am no longer involved with The Smart Company. Nor am I involved with Smart Mobile Studio. I realize that it can be difficult for some to separate me from that product, since I blogged and created momentum for it for more than a decade. But alas, life changes and sometimes you just have to let go.
The QTX Framework, which today has become a fully operational RTL, was written by me back in 2013-2014 (when I was not working for the company due to my spinal injury). And the first version of that framework was released under an open-source license.
When I returned to The Smart Company, it was decided that, to save time, we would pull the QTX Framework into the Smart RTL. Since I owned the QTX Framework, and it was open source, it was perfectly fine to include the code. The code was bound by the open-source licensing model, so we broke no rules by including it. And I gave dispensation for it to be included (although the original license explicitly stated that the units should remain untouched and separate, and only inherited from).

Quartex Media Desktop, a complete desktop system (akin to X Windows for Linux) written completely in Object Pascal, including a clustered, service oriented back-end. All of it done in Quartex Pascal — a huge project in its own right. Under Quartex Pascal, this is now a project type, which means you can have your own cloud solution at the click of a button.
As I left the company for good before joining Embarcadero, The Smart Company and I came to an agreement that the parts of QTX that still exist in the Smart Mobile Studio RTL could remain. It would be petty and small to make a big deal out of it, and I left on my own terms. No point ruining all that hard work we did. So we signed an agreement that underlined that there would be overlaps here and there in our respective codebases, and that the QTX Framework and Quartex Media Desktop were my property.
Minor overlaps
As mentioned there will be a few minor overlaps, but nothing substantial. The class hierarchy and architecture of the QTX RTL is so different that 90% of the code in the Smart RTL simply won’t work. And I made it that way on purpose, so there would be no debates over who made what.
QTX represents how I felt the RTL should have been done. And it’s very different from where Smart Mobile Studio ended up.
The overlaps are simple and few, but it can be helpful for Smart developers to know about them if they plan on taking QTX for a test-drive:
- TInteger, TString and TVariant. These were actually ported from Delphi (part of the Sith Library, a pun on Delphi’s Jedi Library).
- TDataTypeConverter came in through the QTX Framework. It has been completely re-written from scratch. The QTX version is endian aware (works on ARM, x86 and PPC). Classes that deal with binary data (like TStream, TBuffer etc.) inherit from TDataTypeConverter. That way, you don’t have to call a secondary instance just to perform conversion. This is easier and much more efficient.
- Low-level codecs likewise came from the QTX Framework, but I had to completely re-write the architecture. The old model could only handle binary data, while the new codec classes also cover text-based formats. Codecs can be daisy-chained, so you can do encoding, compression and encryption by feeding data into the first codec and catching the processed data from the last codec in the chain (see the sketch after this list). Very handy, especially when dealing with binary messages and database drivers.
- The in-memory dataset likewise came from the QTX Framework. This is probably the only unit that has remained largely unchanged. So that is a clear overlap between the Smart RTL and QTX.
- TextCraft is an open source library that covers Delphi, Freepascal and DWScript. The latter was pulled in and used as the primary text-parser in Smart. This is also the default parser for QTX, and it has been largely re-written so it could be re-published under the Shareware license.
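To illustrate the daisy-chaining idea from the codec bullet above, here is a minimal sketch. The names (TQTXCodec, Attach, Process) are invented for this example and are not the actual QTX RTL declarations; only the chaining concept itself is taken from the description above.

```pascal
uses
  SysUtils; // for TBytes

type
  // A minimal codec link. Descendants override Process to transform
  // the data (encode, compress, encrypt) and then call PassOn.
  TQTXCodec = class
  private
    FNext: TQTXCodec;
  protected
    procedure Process(const Data: TBytes); virtual; abstract;
    procedure PassOn(const Data: TBytes);
  public
    function Attach(ACodec: TQTXCodec): TQTXCodec;
    procedure Write(const Data: TBytes);
  end;

function TQTXCodec.Attach(ACodec: TQTXCodec): TQTXCodec;
begin
  FNext := ACodec;
  Result := ACodec; // return the tail, so Attach calls can be chained
end;

procedure TQTXCodec.PassOn(const Data: TBytes);
begin
  // Hand the transformed data to the next link in the chain, if any
  if Assigned(FNext) then
    FNext.Write(Data);
end;

procedure TQTXCodec.Write(const Data: TBytes);
begin
  Process(Data); // each descendant ends Process with PassOn(...)
end;
```

With a chain like `Base64.Attach(Deflate).Attach(Cipher)`, you feed raw bytes into the first codec’s Write and catch the fully processed data from the last link.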
Since the QTX RTL is very different from Smart, I haven’t really bothered to use all of the old code. Stuff like the CSS Effects units likewise came from the QTX Framework, but the architecture I made for Smart is not compatible with QTX, so it makes no sense using that code. I ported my Delphi tweening library to DWScript in 2019 as a part of my Patreon project, so most of the effects in QTX use our own tweening library. This has some very powerful perks, like being able to animate a property on any object rather than just an HTML element. And you can use it for Canvas drawing too, which is nice.
Progress. Where are we now?
So, where am I in this work right now? The RTL took more or less a year to write from scratch. I only have the weekends for this type of work, and it would have been impossible without my backers. So I cannot thank each backer enough for their faith in this. The RTL and new IDE are actually just a stopping point on the road to a much bigger project, namely CloudForge, which is the full IDE running as an application on the Quartex Media Desktop. But first, let’s see what has been implemented!
AST unit view

The Unit Overview panel. Easy access to ancestor classes as links (still early R&D). And the entire RTL on a second tab. This makes it very easy to learn the new RTL. There is also proper documentation, both as PDF and standard helpfile.
When the object-pascal code is compiled by DWScript, it goes through a rigorous process of syntax checking, parsing, tokenizing and symbolization (or objectification is perhaps a better word), where every inch of the code is transformed into objects that the compiler can work with and produce code from. These symbols are isolated in what is known as an AST, short for “Abstract Syntax Tree” – basically a massive in-memory tree structure that contains your entire program reduced to symbols and expressions.
In order for us to have a live structural view of the current unit, I have created a simple background process that compiles the current unit, grabs the resulting AST, locates the unit symbol, and then displays the information in a meaningful way. This is exactly what most other IDEs do, be it Visual Studio, Embarcadero Delphi, or Lazarus.
So we have that in place already. I also want to make it more elaborate, where you can click yourself to glory by examining ancestors, interfaces, partial class groups – as well as an option to include inherited members (which should be visually different). Either way, the AST code is done. I just need to consolidate a few tidbits so that each treeview node retains information about source-code location (so that when you double-click a symbol, the IDE navigates to where the symbol exists in the codebase).
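To make that concrete, here is roughly what the background compile looks like against DWScript’s public API. Treat it as a sketch under assumptions: the unit, class and property names below are from my own reading of DWScript and may differ slightly between versions.

```pascal
uses
  dwsComp, dwsExprs, dwsSymbols;

// Compile a unit in the background and enumerate its top-level
// symbols - the same data the Unit Overview treeview is fed with.
procedure ListUnitSymbols(const SourceCode: string);
var
  Compiler: TDelphiWebScript;
  Prog: IdwsProgram;
  i: Integer;
  Sym: TSymbol;
begin
  Compiler := TDelphiWebScript.Create(nil);
  try
    // Compile does not raise on bad code; errors land in Prog.Msgs,
    // so a live view still works while the user is typing
    Prog := Compiler.Compile(SourceCode);
    for i := 0 to Prog.Table.Count - 1 do
    begin
      Sym := Prog.Table.Symbols[i];
      if Sym is TClassSymbol then
        WriteLn('class: ', Sym.Name)
      else if Sym is TFuncSymbol then
        WriteLn('routine: ', Sym.Name);
    end;
  finally
    Prog := nil; // release the program before freeing the compiler
    Compiler.Free;
  end;
end;
```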
JavaScript parsing and compilation
QTX doesn’t include just one compiler, but three. In order for the unit structure to also work for JavaScript files, I have modified Besen, which is an ES5 compatible JavaScript engine written in Delphi. And I use this much like DWScript to parse and work with the AST.

Besen is a wonderful compiler. Where DWScript produces JavaScript from Object Pascal, Besen produces bytecodes from JavaScript (which are further JIT compiled). This opens up some interesting options. I need to add support for ES6 though; modules and require are especially important for modern node.js programming (and yes, the QTX RTL supports these concepts).
HTML5 Rendering and CSS preview
Instead of using Chromium inside the IDE, which is pretty demanding, I have decided to go for HTMLComponents to deal with “normal” tasks. Take the “Welcome” tab-page, for example – it would be complete overkill to use a full Chromium instance just for that, and TEdgeBrowser is likewise shooting sparrows with a bazooka.
HTMLComponents has a blisteringly fast panel control that can render more or less any HTML5 document you throw at it (much better than the old TFrameViewer component). Obviously it doesn’t have JS support, but we won’t be using JS when displaying static information – or indeed, editing HTML5 compliant content.
WYSIWYG Editor
The biggest benefit of HTMLComponents is that it’s a fully operational HTML compliant editor. Which means you can do more or less all your manual design with that editor. In Quartex Pascal there is direct support for HTML files. Quartex works much like Visual Studio Code, except it has visual designers. So you can create an HTML file and either type in the code manually, or switch to the HTMLComponents editor.
That is what products like Help & Manual use it for.

Image from HTMLComponents application gallery website
Support for HTML, CSS and JS files directly
While not new, this is pretty awesome. Especially since we can do a bit of AST navigation here too, to present similar information as we do for Object Pascal. The whole concept behind the QTX RTL is that you have full control over everything. You can stick to a normal Delphi-like form designer and absolute positioning, or you can opt for a more dynamic approach where you create content via code. This is perfect for modern websites that blend scrolling, effects and content (both dynamic and static) for a better user experience.
You can even (spoiler alert) take a piece of HTML and convert it into visual controls at runtime. That is a very powerful function, because when doing large-scale, elaborate custom controls – you can just tell the RTL “hey, turn this piece of HTML into a visual control for me, and deliver it back when you are ready”.
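Here is how such a call could look in practice. Note that TQTXWidget and FromHTML are hypothetical names I am using to illustrate the concept; the actual RTL entry point may be named and shaped differently.

```pascal
procedure TMyForm.BuildPanel;
begin
  // Hand a piece of markup to the RTL; the finished widget is
  // delivered asynchronously once the markup has been processed.
  // TQTXWidget.FromHTML is an assumed API, not the documented one.
  TQTXWidget.FromHTML('<div class="panel"><button>OK</button></div>',
    procedure (Widget: TQTXWidget)
    begin
      Widget.Parent := Self; // attach it to the current form
    end);
end;
```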
Proper Form Designer
Writing a proper form designer like the one Delphi has is no walk in the park. It has to deal not just with a selected control, but also with child elements. It also has to be able to select multiple elements based on key-presses (shift + click adds another item to the selection), or from the selection rectangle.

A proper form layout control. Real-time rendering of controls is also possible, courtesy of HTMLComponents, but to be honest it just gets in the way. It’s much easier to work with this type of designer. It’s fast, responsive, accurate and will have animated features that make it a joy to work with.
Well, that’s not going to be a problem. I spent a considerable amount of time writing a proper form designer, one that takes both fixed and dynamic content into account. So the Quartex form designer handles both absolute and stacked layout modes (stacked means top-down, what in HTML is known as block element display, where each section stretches to the full width and only has a defined height [that you can change]).
Node.js Service Protocol Designer
Writing large-scale servers, especially clustered ones, is very fiddly in vanilla JavaScript under node.js. It takes 3 seconds to create a server object, but as we all know, without proper error handling, a concurrent message format, modern security and a solid framework to handle it all — that “3 second” thing falls to the ground quickly.
This is where the Ragnarok message system comes in. It is both a message framework and a set of custom servers adapted for dealing with that type of data. It presently supports WebSocket, TCP and UDP, but expanding that to include REST is very easy.

This is where the full might of the QTX Framework begins to dawn. As I wrote before we started on the Quartex Media Desktop (which this IDE and RTL are a part of), in the future developers won’t just drag & drop components on a form; they will drag & drop entire ecosystems…
But the power of the system is not just in how it works, how you can create your own protocols, and then have separate back-end services deal with one part of your infrastructure’s workload. It is that you can visually design the protocols using the Node Builder. This is being moved into the QTX IDE as I type, so it should make it into Build 12 next weekend.
In short, you design your protocols, messages and types – a bit like RemObjects SDK if you have used that. And the IDE generates both server and client code for you. All you have to do is fill in the content that acts on the messages. Everything else is handled by the server components.
Suddenly, you can spend a week writing a large-scale, platform agnostic service stack – something that would have taken JavaScript developers months to complete. And instead of having to manage a 200,000-line codebase in JavaScript – you can enjoy a 4,000-line, easily maintainable Object Pascal codebase.
Build 11
I’m hoping to have Build 11 out tomorrow (Sunday) for my backers. I’m still experimenting a bit with the symbol information panel, since I want it to be informative not just for classes, but also for methods and properties. Making it easy to access ancestor implementations etc.
I also need to work a bit more on the JS parsing. Under ES5 it’s typical to use variables to hold objects (which is close to how we think of a class), so composite and complex datatypes must be expanded. I also need to get symbol positions to work properly, because since Besen is a proper bytecode compiler, it doesn’t keep as much information in its AST as DWScript does.
Widgets (which is what visual controls are called under QTX) should appear in build 12 or 13. The IDE supports zip-packages. The file-source system I made for the TVector library (published via Embarcadero’s website a few months back) allows us to mount not just folders as a file-source, but also zip files. So QTX component packages will be ordinary zip-files containing the .pas files, asset files and a metadata descriptor file that tells the IDE what to expect. Simple, easy and very effective.
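To give you an idea of how that could look from code, here is a small sketch. TQTXZipFileSource and the Mount call are placeholder names made up for this example; the actual TVector file-source API may differ.

```pascal
var
  Package: TQTXZipFileSource; // assumed class name, for illustration
begin
  // A component package is just a zip archive containing the .pas
  // files, the asset files and a metadata descriptor for the IDE
  Package := TQTXZipFileSource.Create('packages\mywidgets.zip');
  FileSources.Mount(Package); // from here on it reads like a folder
end;
```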
Support the project!
Want to support the project? All financial backers that donate $100+ get their name in the product, access to the full IDE source-code on completion, and access to the Quartex Media Desktop system (which is a complete web desktop with a clustered back-end, compiled to JavaScript and running on node.js. Portable, platform and chipset independent, and very powerful).

Your help matters! It pays for components, hours and above all, tools and motivation. In return, you get full access to everything and a perpetual license. No backers will ever pay a cent for any future version of Quartex Pascal. Note: PM me after donating so I can get you added to the admin group! Click here to visit paypal: https://www.paypal.com/paypalme/quartexNOR
All donations are welcome, both large and small. But donations over $100, especially recurring ones, are what drives this project forward. They also get you access to the Quartex Developer group on Facebook where new builds, features etc. are announced. It is the best way to immediately get the latest build, read the documentation as it’s written and see the product come to life!
Quartex Pascal, convergence is near

A Quartex Cluster of 5 x ODroid XU4. A $400 supercomputer running Quartex Media Desktop. Enough to power a school.
I only have the weekends to work on Quartex Pascal, but I have spent the past 18 months tinkering away, making up for wasted time. So I’m just going to leave some pictures here for you to enjoy.
Note: I was asked on LinkedIn if this has anything to do with Smart Mobile Studio, and the answer is a resounding no. I have nothing to do with Smart any more. QTX Pascal is a completely separate project that is written from scratch by yours truly.
The QTX Framework was initially a library I created back in 2014, but it has later been completely overhauled and turned into a full RTL. It is not compatible with Smart Pascal and has a completely different architecture.
QTX Pascal is indirectly funded by the Amiga Retro Community (which might sound strange, but the technical level of that community is beyond anything I have encountered elsewhere), since QTX is central to the creation of the Quartex Media Desktop. It is a shame that Embarcadero decided not to back the project. The compiler and toolchain would have been a part of Delphi by now, and I wouldn’t have to write a separate IDE. But when they see what this system can deliver in terms of services, database work, mobile and embedded – they might regret it. The project only accepts donation funding; I am not interested in investors or partners. If you want a vision turned into reality, you gotta do it yourself. Everything else just gets in the way.
For developers by developers
Quartex Pascal is made for the community. It will be free for students and open-source projects. And a commercial license will never exceed $300. It is a shareware license, and the financial aspect is purely to help fund further research and development of the desktop cloud platform. The final goal (CloudForge) is to compile the IDE itself to JavaScript, so people only need a browser to write enterprise level applications via Quartex Media Desktop. When that is finished, my work is done – and people have a clear path to the future.

Unlike other systems, QTX started with the non-visual stuff, so the system has a well implemented infrastructure for writing universal services and servers, using node.js as a deployment host. Services are also Docker friendly. Runs without change on Windows, Mac OS, Linux and a wealth of embedded systems and SBCs (single board computers)

A completely new RTL written from scratch generates close to native speed JS, highly compatible (even with legacy browsers) and rock solid

There are several display modes for QTX forms, from dynamic to absolute positioning. You can mix and match between HTML and QTX code, including a HTML5 compliant WYSIWYG editor and style manager. Makes content handling a lot easier

Write Object Pascal, JavaScript, HTML, LDEF (WebAssembly), node.js services – or mix and match between them all for maximum potential. Writing mobile applications is now ridiculously easy compared to “other tools” out there.
Oh, and for the proverbial frosting – the full clustered Quartex Media Desktop and services is a project type. That’s right: a complete cloud infrastructure suitable for teams, kiosks, embedded, schools, intranets – and even a replacement OS for ChromeOS. You don’t need to interface with Amazon, you get your own Amazon (optional, naturally).

Filesystem over websocket, IPC between hosted apps and desktop, full back-end services that are clustered, and run on anything from a Raspberry PI 4 to low-cost ARM SBCs.

Let there be rock
Oh, and documentation. Loads and loads of documentation.

Proper documentation, with both a class overview and explanations that a human being has written, is paramount for learning and getting up to speed quickly.
I don’t have vacation this year, which means I only have weekends to tinker away. But I have spent the past 18-ish months preparing and slowly finishing the pieces I needed. From vector containers to form design controls, to a completely re-written RTL from scratch – so yeah. This time I’m doing it my way.
Nodebuilder, QTX and the release of my brand new social platform
First, let me wish everyone a wonderful new year! With xmas and the silly season firmly behind us, and my batteries recharged – I feel my coding fingers itch to get started again.
2019 was a very busy year for me. Exhausting even. I have juggled both a full time job, kids and family, as well as our community project, Quartex Media Desktop. And since that project has grown considerably, I had to stop and work on the tooling. Which is what NodeBuilder is all about.
I have also released my own social media platform (see further down). This was initially scheduled for Q4 2020, but Facebook pissed me off something insane, so I set it up in December instead.
NodeBuilder
For those of you who read my blog, you probably remember the message system I made for the Quartex Desktop, called Ragnarok. This is a system for dealing with message dispatching and RPC, except that the handling is decoupled from the transport medium. In other words, it doesn’t care how you deliver a message (WebSocket, UDP, REST), but rather takes care of serialization, binary data and security.
All the back-end services that make up the desktop system are constructed around Ragnarok. So each service exposes a set of methods that the core can call, much like a normal RPC / SOAP service would. Each method is represented by a request and response object, which Ragnarok serializes to a JSON message envelope.
In our current model I use WebSocket, which is a full-duplex, long-term connection (to avoid the overhead of having to connect and perform a handshake for each call). But there is nothing in the way of implementing a REST transport layer (UDP is already supported; it’s used by the Zero-Config system, where the services automatically find each other and register, as long as they are connected to the same router or switch). For the public service I think REST makes more sense, since it will better utilize the software clustering that node.js offers.
Now for small services that expose just a handful of methods (like our chat service), writing the message classes manually is not really a problem. But the moment you start having 20 or 30 methods – and need to implement up to 60 message classes manually – this approach quickly becomes unmanageable and prone to errors. So I simply had to stop before xmas and work on our service designer. That way we can generate the boilerplate code in seconds rather than days and weeks.
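To see why this balloons, consider what a single hand-written pair might look like. The declarations below are illustrative only; TQTXMessage is an assumed base class, not the actual Ragnarok declaration.

```pascal
type
  // One request/response pair for one service method. Multiply this
  // by 30 methods and you have 60 such classes to keep in sync by hand.
  TFileReadRequest = class(TQTXMessage) // TQTXMessage is assumed
  public
    FileName: string;
    Offset: Int64;
    Size: Int64;
  end;

  TFileReadResponse = class(TQTXMessage)
  public
    Data: string;  // payload, e.g. base64 encoded
    Error: string;
  end;
```

Ragnarok wraps instances like these in its JSON envelope; the service designer simply generates the declarations (plus the matching server and client stubs) from the visual protocol definition.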
While I don’t have time to evolve this software beyond that of a simple service designer (well, I kinda did already), I have no problem seeing this system as the beginning of a wonderful, universal service authoring system. One that includes coding, libraries and the full scope of the QTX runtime-library.
In fact, most of the needed parts are in the codebase already, but not everything has been activated. I don’t have time to build both a native development system AND the development system for the desktop.

NodeBuilder already has a fully functional form designer and code editor, but it is dormant for now due to time restrictions. Quartex Media Desktop comes first.
But right now, we have bigger fish to fry.
Quartex Media Desktop
We have made tremendous progress on our universal desktop environment, to the point where the baseline services are very close to completion. A month should be enough to finish this unless something unforeseen comes up.
You have to factor in that this project has only had weekends and the odd after-work hours allocated for it. So even though we have been developing this for 12 months, the actual amount of days is roughly half of that.
So all things considered, I think we have produced a massive amount of code in such a short time. Even simple 2D games usually take 2 years of daily development, and that includes a team of at least 5 people! I’m a single developer working in my spare time.
So what exactly is left?
The last thing we did before xmas was to throw out the last remnants of Smart Mobile Studio code. The back-end services are now completely implemented in our own QTX runtime-library, which has been written from scratch. There is not a line of code from Smart Mobile Studio in QTX, which means we no longer have to care what that system does or where it goes.
To sum up:
- Push all file handling code out of the core
- Implement file-handling as its own service
Those two steps might seem simple enough, but you have to remember that the older code was based on the Linux path system, and was read-only.
So when pushing that code out of the core, we also have to add all the functionality that was never implemented in our prototype.

Each class actually represents a separate “mini” program, and there are still many more methods to go before we can put this service into production.
Since JavaScript does not support threads, each method needs to be implemented as a separate program. So when a method is called, the file/task manager literally spawns a new process just for that task. And the result is swiftly returned to the caller in an async manner.
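In code, the pattern looks something like the sketch below. TQTXProcess, TDeleteRequest and SendResponse are assumed names (a thin wrapper around node.js child processes); the real QTX classes and task scripts will differ.

```pascal
procedure TFileService.HandleDeleteFile(const Request: TDeleteRequest);
var
  Worker: TQTXProcess; // assumed wrapper class, for illustration
begin
  // Every call gets its own short-lived node.js child process,
  // since there are no threads to offload the work to
  Worker := TQTXProcess.Create('tasks/deletefile.js');
  Worker.OnFinished :=
    procedure (ExitCode: Integer; const Output: string)
    begin
      SendResponse(Request, Output); // back to the caller, async
      Worker.Free;
    end;
  Worker.Execute([Request.FullPath]);
end;
```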
So what is ultimately simple, becomes more elaborate if you want to do it right. This is the price we pay for universality and a cluster enabled service-stack.
This is also why I have put the service development on pause until we have finished the NodeBuilder tooling. And I did this because I know from experience that the moment the baseline is ready, both myself and users of the system are going to go “oh we need this, and that and those”. Being able to quickly design and auto-generate all the boilerplate code will save us months of work. So I would rather spend a couple of weeks on NodeBuilder than waste months having to manually write all that boilerplate code down the line.
What about the QTX runtime-library?
Writing an RTL from scratch was not something I could have anticipated before we started this project. But thankfully the worst part of this job is already finished.
The RTL is divided into two parts:
- Non-visual code: classes and methods that make QTX an RTL
- Visual code: custom controls + standard controls (buttons, lists etc.)
Visual designer
As you can see, the non-visual aspect of the system is finished and working beautifully. It’s a lot faster than the code I wrote for Smart Mobile Studio (roughly twice as fast on average). I also implemented a full visual designer, both as a Delphi visual component and as a QTX visual component.

Quartex Media Desktop makes running on several machines [cluster] easy and seamless
Why is the QTX Runtime-Library important again?
When the desktop is out the door, the true work begins! The desktop has several roles to play, but the most important aspect of the desktop is to provide an ecosystem capable of hosting web based applications, offering features and methods traditionally only found in Windows, Linux or OS X. It truly is a complete cloud system that can scale from a single affordable SBC (single board computer) to a high-end cluster of powerful servers.
Clustering and writing distributed applications has always been difficult, but Quartex Media Desktop makes it simple. It is no more difficult for a user to work on a clustered system than it is to work on a normal, single OS. The difficult part has already been taken care of, and as long as people follow the rules, there will be no issues beyond ordinary maintenance.
And the first commercial application to come out of Quartex Components is Cloud Forge, which is the development system for the platform. It has the same role as Visual Studio for Windows, or Xcode for Apple OS X.
I have prepared 3 compilers for the system already. First there is C/C++, courtesy of Clang, so C developers will be able to jump in and get productive immediately. The second compiler is Freepascal, or more precisely pas2js, which allows you to compile ordinary Freepascal code (which is highly Delphi compatible) to both JavaScript and WebAssembly.
And last but not least, there is my fork of DWScript, which is the same compiler that Smart Mobile Studio uses. Except that my fork is based on the absolute latest version, and I have modified it heavily to better match special features in QTX. So right out of the door CloudForge will have C/C++, two Object Pascal compilers, and vanilla JavaScript and TypeScript. TypeScript also has its own WebAssembly compiler, so doing hard-core development directly in a browser or HTML5 viewport is where we are headed.
Once the IDE is finished I can finally, finally continue on the LDEF bytecode runtime, which will be used in my BlitzBasic port and will ultimately replace Clang, Freepascal and DWScript. As a bonus it will emit native code for a variety of systems, including x86, ARM, 68k [including 68080] and PPC.
This might sound incredibly ambitious, if not impossible. But what I’m ultimately doing here is moving existing code that I already have into a new paradigm.
The beauty of Object Pascal is the sheer size and volume of available components and code. Some refactoring must be done due to the async nature of JS, but when needed we fall back on WebAssembly via Freepascal (WASM executes linearly, just like ordinary native code does).
A brand new social platform
During December, Facebook royally pissed me off. I cannot underline enough how much I loathe A.I censorship, and the mistakes that A.I makes – in which you are utterly powerless to complain or be heard by a human being. In my case I posted a gif from their own mobile application, of a female body builder that did push-ups while doing hand-stands. In other words, a completely harmless gif with strength as the punchline. The A.I was not able to distinguish between a leotard and bare skin, and just like that I was muted for over a week. No human being would make such a ruling. As an admin of a fairly large set of groups, there are many cases where bans are the result of disgruntled members acting out of revenge, reporting technical posts about coding as porn or offensive content. Again, you are helpless, because there is nobody you can talk to about resolving the issue. And this time I had enough.
It was always planned that we would launch our own social media platform, an alternative to Facebook aimed at adult geeks rather than kids (Facebook operates with an age limit of 12 years). So instead of waiting, I rushed out and set up a brand new social network. One where those banal restrictions Facebook has conditioned us with do not apply.
Just to underline, this is not some simple and small web forum. This is more or less a carbon copy of Facebook the way it used to be 8-9 years ago. So instead of having a single group on Facebook, we can now have as many groups as we like, on a platform that looks more or less identical to Facebook – but under our control and human rules.

Amigadisrupt.com is a brand new social media platform for geeks
You can visit the site right now at https://www.amigadisrupt.com. Obviously the major content on the platform right now is dominated by retro computing – but groups like Delphi Developer and FPC Developer have already been set up and are in use. But if you are expecting thousands of active users, that will take time. We are now closing in on 250 active users, which is pretty good for such a short period of time. I don’t want a platform anywhere near as big as FB. The goal is to get 10k users and have a stable community of coders, retro geeks, builders and creative individuals.
AD (Amiga Disrupt) will be a standard application that comes with Quartex Media Desktop. This is the beauty of web technology, in that it can unify different resources under one roof. And we will have our cake and eat it come hell or high water.
Disclaimer: Amiga Disrupt has a lower age limit of 18 years. This is a platform meant for adults. Which means there will be profanity, jokes that would get you banned on Facebook, and content that is not meant for kids. This is hacker-land, and political correctness is considered toilet paper. So if you need the kind of social toffery that FB and Twitter deal in, you will be kicked by one of the admins.
After you sign up your feed will be completely empty. Here is how to get it started. And feel free to add me to your friends-list!
NVidia Jetson Nano: Part 2
Last week I posted a review of the NVidia Jetson Nano IoT board, where I gave the board a score of 6 out of 10. This score was based on the fact that the CPU in particular was a joke compared to other, far better and more affordable boards on the market.

Ubuntu running on the NVidia Jetson Nano
Since my primary segment for IoT boards involves web technology, WebAssembly and Asm.js (HTML5) in particular, the CPU did not deliver the performance I expected from an NVidia board. But as stated in my article, this board was never designed to be a multi-purpose maker board. This is a board for training and implementing A.I models (machine learning), which is why it ships with an astonishing 128 CUDA cores in its GPU.
However, two days after I posted said review, NVidia issued several updates. The graphics drivers were updated, followed by a custom Chrome browser build. Naturally I was eager to see if this would affect my initial experience, and to say it did would be an understatement.
Driver update
After this update the Jetson was able to render the HTML5 based desktop system I use for testing, and it passed more or less all my test-cases with flying colors. In fact, I was able to run two tests simultaneously. It renders Quake 3 (uses WebGL) in full-screen at 45 fps, with Tyrian (uses the GPU’s 2D functions) running at a whopping 35 fps (!).
Obviously that payload is significant, and all 4 CPU cores were taxed at 70% when running two such demanding programs inside the same viewport. Personally I’m not much of a gamer, but I find that testing a SoC by its ability to process HTML5 and JS, especially code that taps into the underlying GPU for both 3D and 2D – gives a good picture of the SoC’s capabilities.
I still think the CPU was a poor choice by NVidia. The production cost of adding an A72 CPU is practically non-existent. The A72 would add 50 cents (USD 0.5) to the board’s retail value, but in return it would give us a much higher clock speed and a beefier CPU cache. The A72 is in practical terms twice as fast as the A57.
A better score
But, fair is fair, and I am changing the review score to 7 out of 10, which is the best score I have ever given to any IoT device.
The only board that has a chance of topping that, is the ODroid N2, when (if ever) Hardkernel bothers to implement X drivers for the Mali GPU.
NVidia Jetson Nano
The sheer volume of technology you can pick up for $100 in 2019 is simply amazing. Back in 2012 when the IoT revolution began, people were amazed at how the tiny Raspberry PI SoC was able to run a modern browser and play Quake. Fast forward to 2019 and that power has been multiplied a hundredfold.
Of all the single board computers available today, the NVidia Jetson Nano exists in a whole different sphere compared to the rest. It ships with 4 GB of RAM, a mid-range 64-bit quad-core processor – and a whopping 128 GPU cores (!).
It does cost a little more than the average board, but not much. The latest Raspberry PI v4 sells for $79, just $20 less than the Jetson. But the Video-Core IV that powers the Raspberry PI v4 was not designed for tasks that the Jetson can take on. Now don’t get me wrong, the Raspberry PI v4 is roughly the same SoC (the CPU on the PI is one revision higher), and the Video-Core GPU is a formidable graphics processor. But if you want to work with artificial intelligence, you need proper CUDA support; something the Video-Core can’t deliver.
The specs
Let’s have a look at the specs before we continue. To give you some idea of the power here, this is roughly the same specs as the Nintendo Switch. In fact, if we go up one model to the NVidia TX 1 – that is exactly what the Nintendo Switch uses. So if you are expecting something to replace your Intel i5 or i7 desktop computer, you might want to look elsewhere. But as an IoT device for artificial intelligence work, it does have a lot to offer:
- 128-core Maxwell GPU
- Quad-core Arm A57 processor @ 1.43 GHz
- System Memory – 4GB 64-bit LPDDR4 @ 25.6 GB/s
- Storage – microSD card slot (devkit) or 16GB eMMC flash (production)
- Video Encode – 4K @ 30 | 4x 1080p @ 30 | 9x 720p @ 30 (H.264/H.265)
- Video Decode – 4K @ 60 | 2x 4K @ 30 | 8x 1080p @ 30 | 18x 720p @ 30 (H.264/H.265)
- Video Output – HDMI 2.0 and eDP 1.4 (video only)
- Gigabit Ethernet (RJ45) + 4-pin PoE header
- USB – 4x USB 3.0 ports, 1x USB 2.0 Micro-B port for power or device mode
- M.2 Key E socket (PCIe x1, USB 2.0, UART, I2S, and I2C)
- 40-pin expansion header with GPIO, I2C, I2S, SPI, UART signals
- 8-pin button header with system power, reset, and force recovery related signals
- Misc – Power LED, 4-pin fan header
- Power Supply – 5V/4A via power barrel or 5V/2A via micro USB port
ARM for the future
Most of the single board computers (SBCs) on the market today sell somewhere around the $70 mark. As a consequence, their processing power and graphical capacity are severely limited compared to traditional desktop computers. IoT boards are closely linked to the mobile phone industry, typically a couple of years behind the latest SoC; which means you can expect to see ARM IoT boards that floor an Intel i5 sometime in 2022 (perhaps sooner).
Top of the line mobile phones, like the Samsung Galaxy Note 10, are presently shipping with the Snapdragon 830 SoC. The Snapdragon 830 delivers the same processing power as an Intel i5 CPU (actually slightly faster). It is amazing to see how fast ARM is catching up with Intel and AMD. Powerful technology is expensive, which is why IoT is brilliant; by using the same SoCs that the mobile industry has perfected, they are able to tap into the price drop that comes with mass production. IoT is quite frankly piggybacking on the ever-growing demand for faster mobile devices; which is a great thing for us consumers.
Intel and AMD are not resting on their laurels either, and Intel in particular has lowered their prices considerably to meet the onslaught of cheap ARM boards hitting the market. But despite their best efforts – there is no doubt where the market is heading; ARM is the future, for mobile, desktop and server alike.
The Nvidia Jetson Nano
Out of all the boards available on the market for less than US $100 – and that is a long list – one board deserves special attention: the NVidia Jetson Nano developer board. This was released back in March of this year. I wanted to get this board as it hit the stores – but since my ODroid XU4 and N2 have performed so well, there simply was no need for it; until recently.
First of all, the Jetson SBC is not a cheap $35 board. The GPU on this board is above and beyond anything available for competing single board computers. Most IoT boards are fitted with a relatively modest GPU (graphical processing unit), typically with 1, 2, 4 or 8 cores. The NVidia Jetson Nano, though, ships with 128 GPU cores (!); making it by far the best option for creating, training and deploying artificial intelligence models.

The Jetson boards were designed for one thing, A.I. It’s not really a general purpose SBC
I mean, the IoT boards to date have all suffered from the same tradeoffs, where the makers must try to strike a balance between CPU, GPU and RAM. This doesn’t always work out like the architects have planned. Single board computers like the Asus Tinkerboard or NanoPI Fire 3 are examples of this. The Tinkerboard is probably the fastest 32-bit ARM processor I have ever experienced, but the way the different parts have been integrated, coupled with poorly planned power distribution (as in electrical power), resulted in terrible stability problems. The NanoPI Fire 3 is also a paper-tiger that, despite the best of intentions, is useless for anything but Android.
The Jetson Nano is more expensive, but NVidia has managed to balance each part in a way that works well; at least for artificial intelligence work. I must underline that the CPU was a big disappointment for me personally. I misread the specs before I ordered and expected it to be fitted with a quad-core A72 CPU @ 2 GHz. Instead it turned out to use the A57 CPU running @ 1.43 GHz. This might sound unimportant, but the A72 is almost twice as fast (and costs only 50 cents more). Why NVidia decided on that model of CPU stands as a complete mystery.
In my view, this sends a pretty clear message that NVidia has no interest in competing with other mainstream boards, but instead aims solely at the artificial intelligence market.
Artificial intelligence
If working with A.I is something you find interesting, then you are in luck. This is the primary purpose of the Jetson Nano. Training an A.I model on a Raspberry PI v4 or ODroid N2 is also possible, but it would be a terrible and frustrating experience, because those boards are not designed for it. What those 128 CUDA cores can chew through in 2 hours would take a week for a Raspberry PI to process.
If we look away from the GPU, single board computers like the ODroid N2 are better in every way. The power distribution on the SoC is better, the boot options are better, the bluetooth and IR options are better, the network and disk controllers are more powerful, the RAM runs at higher clock speeds – and last but not least, the “little-big” CPU architecture ODroid devices are known for just floors the Jetson. The Jetson literally has nothing to compete with if we ignore that GPU.
So if you have zero interest in A.I, CUDA or Tensorflow, then I urge you to look elsewhere. To put the CPU into perspective: the model used on the Jetson is only slightly faster than the Raspberry PI 3b. The extra memory helps it when browsing complex HTML5 websites, but not enough to justify the price. By comparison, the ODroid XU4 retails at $40 and will slice through those websites like they were nothing. So if you are looking for a multi-purpose board, the Jetson Nano is not it.
Upstream Ubuntu
Most IoT boards ship with a trimmed-down Linux distribution especially adapted for that board. This is true for more or less all the IoT boards I have tested over the last 18 months. The reason companies ship these slim and optimized setups is simply because running a full Linux distribution, like we have on x86, demands too much of the CPU (read: there will be little left for your code). These are micro computers after all.

The Jetson Ubuntu distro comes pre-loaded with drivers and various A.I libraries
The difference between SBC’s like the PI, ODroid (or whatnot) and the Jetson Nano, is that Jetson comes with a full, upstream Ubuntu based distribution. In other words – the same distribution you would install for your desktop PC, is somehow expected to perform miracles on a device only marginally faster than a Raspberry PI 3b.
I must say I am somewhat puzzled by Nvidia’s choice of Linux for this board, but against all odds the board does perform. The desktop is responsive and smooth despite the payload involved, most likely because the renderer and layout engine taps into the GPU cores. But optimal? Hardly.
Other distributions?
One thing that must be stressed when it comes to the Jetson Nano, is that Nvidia is not shipping a system where everything is wide open. Just like the Raspberry PI’s VC (video core) GPUs are proprietary and require commercial, closed-source drivers, so does the Jetson. It’s not just a question of drivers in this case, but of a customized kernel.
The board is called the “Nvidia Jetson Nano developer kit”, which means Nvidia sells this exclusively to work with A.I under their standards. For this reason, there are no alternative distributions or flavours. You are expected to work with Ubuntu, end of story.
Having said that, the way to get around this is simply to work backwards and strip the official disk image of services and packages you don’t need. A full upstream Ubuntu install means you have two browsers, email and news clients, music players, movie players, photo software, a full office suite – and more python libraries than you will ever need. With a little bit of cleaning, you can trim the distro down and make it suitable for other tasks.
Practical uses
No single board computer is identical. There are different CPUs, different GPUs, IO controllers and RAM – which all help define what a board is good at.
The Jetson Nano is a powerful board, but only within the sphere it was designed for. The moment you move away from artificial intelligence, the board is instantly surpassed by cheaper models.

51 fps running Quake [JavaScript] is very good. But the general experience of HTML5 is not optimal, due to the CPU not being the most powerful
But if gaming is your sole interest, $100 will buy you a much better x86 board to play with. In one of my personal projects (Quartex Media Desktop), which is a cluster “supercomputer” consisting of many IoT boards connected through a switch – the Jetson was supposed to act as the master board. In a cluster design you typically have X number of slaves which run headless (no desktop or graphical output at all) – and a master unit that deals with the display / desktop. The slave boards don’t need much in terms of GPU since they won’t run graphical tasks (so the GPU is often disabled) – which puts enormous pressure on the master to represent everything graphically.
In my case the front-end is written in JavaScript and uses Chrome to render a very complex display. Finding an SBC with enough power to render modern and complex HTML5 has not been easy. I was shocked that the Jetson Nano had poorer performance than devices costing half its asking price.
It could be throttling based on heat, because the board gets really hot. It has a massive heatsink, much like the ODroid N2, but I have a sneaking suspicion that active cooling is going to be needed to get the best out of this puppy. At worst I got 17 fps when running Quake 3 [JavaScript], but I think there was an issue with a driver. After an update was released, the frame-rate went back to 50+ fps. Extremely annoying, because you feel you can’t trust the SoC.
Final verdict
The Nvidia Jetson Nano is an impressive IoT board, and when it comes to GPU power and artificial intelligence, those 128 CUDA cores are a massive advantage. But computing is not just about fancy A.I models. And this is ultimately where the Jetson comes up short. When something as simple as browsing modern websites can unexpectedly bring the SoC to a grinding halt, then I cannot recommend the system for anything but what it was designed to do: artificial intelligence.
There are also some strange design decisions at the hardware level. On most boards the USB sockets have separate data-lines and power routes. But on the Jetson, the four USB 3.0 sockets share a single data-line. In other words, whatever you connect to the device will affect data throughput.
Nvidia also made the same mistake that Asus did, namely to power the USB ports and CPU from the same route. It’s not quite as bad as the Tinkerboard. Nvidia made sure the board has two power sockets, so should undervolting become an issue, you can just plug in a secondary 5V barrel socket (disabled by default).
The lack of eMMC support or a SATA interface is also curious. Hosting a full Ubuntu on a humble SD-card, with an active swapfile, is a recipe for disaster.
So is it worth the $99 asking price?
If you work with artificial intelligence or GPU based computing then yes, the board will help you prototype your products. But if you expect to do anything ordinary, like run demanding HTML5 apps or host CPU intensive system services – I find myself struggling to say yes.
It’s a cool board, and it also makes one hell of a games machine – but the hardware is already outdated. If you want a general purpose single board computer, you are much better off buying the Raspberry PI 4 or the ODroid N2, both are cheaper and deliver much better processing power. Obviously the GPU is hard to match.
I will give the board a score of 6 out of 10, purely because the board is brilliant for A.I and compute tasks. This might change as more distros become available. A full Ubuntu install is one hell of a load on the system after all.
Quartex “Cloud Ripper” hardware
For close to a year now I have been busy on a very exciting project, namely my own cloud system. While I have written about this project quite a bit these past months, mostly focusing on the software aspect, not much has been said about the hardware.
So let’s have a look at Cloud Ripper, the official hardware setup for Quartex Media Desktop.
Tiny footprint, maximum power
Despite its complexity, the Quartex Media Desktop architecture is surprisingly lightweight. The services that make up the baseline system (read: essential services) barely consume 40 megabytes of RAM per instance (!). And while there is a lot of activity going on between these services – most of that activity is message-dispatching. Sending messages costs practically nothing in CPU and network terms. This will naturally change the moment you run your cloud as a public service, or set up the system in an office environment for a team. The more users, the more signals are shipped between the processes – but with the exception of reading and writing large files, messages are delivered practically instantaneously and hardly use CPU time.
One of the reasons I compile my code to JavaScript (Quartex Media Desktop is written from the ground up in Object Pascal, which is compiled to JavaScript) has to do with the speed and universality of node.js services. As you might know, node.js is powered by the Google V8 runtime engine, which means the code is first converted to bytecodes, and further compiled into highly optimized machine-code. When coded right, such JavaScript-based services execute just as fast as those implemented in a native language. There simply are no perks to be gained from using a native language for this type of work. There are however plenty of perks from using node.js as a service-host:
- Node.js delivers the exact same behavior no matter what hardware or operating-system you are booting up from. In our case we use a minimal Linux setup with just enough infrastructure to run our services. But you can use any OS that supports Node.js. I actually have it installed on my Android based Smart-TV (!)
- We can literally copy our services between different machines and operating systems without recompiling a line of code. So we don’t need to maintain several versions of the same software for different systems.
- We can generate scripts “on the fly”, physically ship the code over the network, and execute it on any of the machines in our cluster. While possible to do with native code, it’s not very practical and would raise some major security concerns.
- Node.js supports WebAssembly, so you can use the Elements Compiler from RemObjects to write service modules that execute blazingly fast yet remain platform and chipset independent.
The Cloud-Ripper cube
The principal design goal when I started the project was that it should be a distributed system. This means that instead of having one large service that does everything (read: a typical “native” monolithic design), we instead operate with a microservice cluster design: services that run on separate SBCs (single board computers). The idea here is to spread the payload over multiple micro-computers that combined become more than the sum of their parts.

Cloud Ripper – Based on the Pico 5H case and fitted with 5 x ODroid XU4 SBC’s
So instead of buying a single, dedicated x86 PC to host Quartex Media Desktop, you can instead buy cheap, off-the-shelf, easily available single-board computers and daisy chain them together. So instead of spending $800 (just to pin a number) on x86 hardware, you can pick up $400 worth of cheap ARM boards and get better network throughput and identical processing power (*). In fact, since node.js is universal you can mix and match between x86, ARM, Mips and PPC as you see fit. Got an older PPC Mac-Mini collecting dust? Install Linux on it and get a few extra years out of these old gems.
(*) A single XU4 is hopelessly underpowered compared to an Intel i5 or i7 based PC. But in a cluster design there are more factors than just raw computational power. Each board has 8 CPU cores, bringing the total number of cores to 40. You also get 5 ARM Mali-T628 MP6 GPUs running at 533MHz. Only one of these will be used to render the HTML5 display, leaving 4 GPUs available for video processing, machine learning or compute tasks. Obviously these GPUs won’t hold a candle to even a mid-range graphics card, but the fact that we can use these chips for audio, video and computation tasks makes the system incredibly versatile.
Another design goal was to implement a UDP based Zero-Configuration mechanism. This means that the services will find and register with the core (read: master service) automatically, provided the machines are all connected to the same router or switch.
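The handshake in miniature could look like this. TQTXUDPSocket and the port number are assumptions made for the example; only the broadcast-and-register idea itself is taken from the description above.

```pascal
uses
  SysUtils; // for Format

procedure AnnounceService(const ServiceName: string; ServicePort: Integer);
var
  Socket: TQTXUDPSocket; // assumed wrapper class, for illustration
begin
  Socket := TQTXUDPSocket.Create;
  // A broadcast datagram reaches every node behind the same switch;
  // the master picks it up, replies, and the service registers
  Socket.Send('255.255.255.255', 1900,
    Format('{"service":"%s","port":%d}', [ServiceName, ServicePort]));
end;
```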
The first “official” hardware setup is a cluster based on 5 cheap ARM boards; namely the ODroid XU4. The entire setup fits inside a Pico Cube, which is a special case designed to house this particular model of single board computers. Pico offers several different designs, ranging from 3 boards to a 20 board super-cluster. You are not limited to ODroid XU4 boards if you prefer something else. I picked the XU4 boards because they represent the lowest possible specs you can run the Quartex Media Desktop on. While the services themselves require very little, the master board (the board that runs the QTXCore.js service) is also in charge of rendering the HTML5 display. And having tested a plethora of boards, the ODroid XU4 was the only model that could render the desktop properly (at that low a price range).
Note: If you are thinking about using a Raspberry PI 3B (or older) as the master SBC, you can pretty much forget it. The media desktop is a piece of very complex HTML5, and anything below an ODroid XU4 will only give you a terrible experience (!). You can use smaller boards as slaves, meaning that they can host one of the services, but the master should preferably be an ODroid XU4 or better. The ODroid N2 [with 4Gb Ram] is a much better candidate than a Raspberry PI v4. A Jetson Nano is an even better option due to its extremely powerful GPU.
Booting into the desktop
One of the things that confuse people when they read about the desktop project, is how it is possible to boot into the desktop itself and use Quartex Media Desktop as a ChromeOS alternative.
How can a “cloud platform” be used as a desktop alternative? Don’t you need access to the internet at all times? If it’s a server based system, how then can we boot into it? Don’t we need a second PC with a browser to show the desktop?
Accessing the desktop like a “web-page” from a normal Linux setup
To make a long story short: the “master” in our cluster architecture (read: the single-board computer defined as the boss) is setup to boot into a Chrome browser display under “kiosk mode”. When you start Chrome in kiosk mode, this removes all traces of the ordinary browser experience. There will be no toolbars, no URL field, no keyboard shortcuts, no right-click popup menus etc. It simply starts in full-screen and whatever HTML5 you load, has complete control over the display.
What I have done is to set up a minimal Linux boot sequence. It contains just enough Linux to run Chrome. So it has all the drivers etc. for the device, but instead of starting the ordinary Linux desktop (X or Wayland) – we instead start Chrome in kiosk mode.

Booting into the same desktop through Chrome in Kiosk Mode. In this mode, no Linux desktop is required. The Linux boot sequence is altered to jump straight into Chrome
Chrome is set to load from 127.0.0.1 (this is a special address that always means “this machine”), which is where our QTXCore.js service resides, with its own HTTP/S and WebSocket servers. The client (HTML5 part) is loaded in under a second from the core – and the experience is more or less identical to starting your ChromeBook or NAS box. Most modern NAS (network-attached storage) devices are much more than a file-server today. NAS boxes like those from Asustor Inc have HDMI out, ship with a remote control, and are designed to act as a media center. So you connect the NAS directly to your TV, and can watch movies and listen to music without any manual conversion etc.
In short, you can setup Quartex Media Desktop to do the exact same thing as ChromeOS does, booting straight into the web based desktop environment. The same desktop environment that is available over the network. So you are not limited to visiting your Cloud-Ripper machine via a browser from another computer; nor are you limited to just using a dedicated machine. You can setup the system as you see fit.
Why should I assemble a Cloud-Ripper?
Getting a Cloud-Ripper is not forced on anyone. You can put together whatever spare hardware you have (or just run it locally under Windows). Since the services are extremely lightweight, any x86 PC will do. If you invest in an ODroid N2 board ($80 range) then you can install all the services on that if you like. So if you have no interest in clustering or building your own supercomputer, then any PC, laptop or IOT single-board computer(s) will do. Provided it delivers at least the same power as the XU4 (!)
What you will experience with a dedicated cluster, whether or not you put the boards in a nice cube, is excellent performance for very little money. It is quite amazing what $200 can buy you in 2019. And when you daisy chain 5 ODroid XU4 boards together on a switch, those 5 cheap boards will deliver the same serving power as an x86 setup costing twice as much.

The NVidia Jetson Nano SBC, one of the fastest boards available at under $100
Pico offers 3 different packages. The most expensive option is the pre-assembled cube. This is for some reason priced at $750, which is completely absurd. If you can operate a screwdriver, then you can assemble the cube yourself in less than an hour. So the starter-kit case, which costs $259, is more than enough.
Next, you can buy the XU4 boards directly from Hardkernel for $40 apiece, which will set you back $200. If you order the Pico 5H case as a kit, that brings the sub-total up to $459. But that price-tag includes everything you need except sd-cards. So the kit contains the power-supply, the electrical wiring, a fast gigabit ethernet switch [built into the cube], active cooling, network cables and power cables. You don’t need more than 8Gb sd-cards, which cost practically nothing these days.
Note: The Quartex Media Desktop “file-service” should have a dedicated disk. I bought a 256Gb SSD disk with a USB 3.0 interface, but you can just use a vanilla USB stick to store user-account data + user files.
As a bonus, such a setup is easy to recycle should you want to do something else later. Perhaps you want to learn more about Kubernetes? What about a docker-swarm? A freepascal build-server perhaps? Why not install FreeNas, Plex, and a good backup solution? You can scale the setup to whatever you can afford. If 5 x ODroid XU4 is too much, then get 3 of them instead + the Pico 3H case.
So should Quartex Media Desktop not be for you, or you want to do something else entirely — then having 5 ODroid XU4 boards around the house is not a bad thing.
Oh, and if you want some serious firepower, then order the Pico 5H kit for the NVidia Jetson Nano boards. Graphically those boards are beyond any other SoC on the market (in their price range). But as a consequence the Jetson Nano starts at $99, so for a full kit you will end up paying around $500 for the boards alone. But man, those are the proverbial Ferrari of IOT.
ARM Linux Services with Oxygene and Elements
Linux is one of those systems that just appeals to me out of the box. I work with Windows on a daily basis, but at this point there is really nothing in the way of me jumping ship altogether. I mean, whenever I need something that is Windows specific, I can just fire up a virtual-machine and get the job done there.
The only thing that is stopping me from going “all in” with Linux (and believe me, I have tried) is that finding proper documentation for Linux written with Windows converts in mind is a challenge in itself. Most tutorials are meant for non-developers, like how to install a program via Synaptic and so on; which is brilliant if you have no experience with Linux whatsoever. But finding articles that aim to help a Windows developer get up to speed on Linux, that’s the tricky bit.
One of the features I wanted to learn about was how to run a program as a service on Linux. Under Windows this is quite easy. You have the service manager that gives you a good overview of registered services. And programmatically a service is ultimately just a normal WinAPI program that supports the service-api messages. Writing services in either Object-Pascal or C# is pretty straightforward. I also do a lot of service work via Quartex Pascal (my own toolchain) that compiles to JavaScript. Node.js is actually a very capable service host once you understand the infrastructure.
Writing Daemons with Oxygene and Elements
Since the Elements compiler generates code for ARM Linux, learning how to get a service registered and started on boot is something that I think many developers will be interested in. It was one of the first questions I had when I started looking at Linux, and it took a while to find a clear-cut answer.
In this little article I will show you how I went about it, but please keep in mind that Linux never has “one way” of doing something. Part of the strength that Linux delivers is that you can configure and customize the system completely, from kernel to desktop. You literally have different service sub-systems to pick from, as well as different window-managers, desktop systems (e.g. Wayland or X) and even keyring implementations. This is what makes Linux so confusing when coming from a mono-culture like Microsoft Windows.
As for hardware, I’m using an ODroid N2, which is a very powerful ARM based SBC (single board computer). You can use more or less any ARM device with Elements, provided the Linux distribution is based on Debian. So a Raspberry PI v4 with Ubuntu or Lubuntu will work fine. I’m using the ODroid N2 “full disk image” with Ubuntu Mate. So nothing out of the ordinary.
To make something clear off the bat: a Linux service (called a daemon, from the ancient Greek word for “helper” and “informer”) is just an ordinary shell application. You don’t have to do anything in particular in terms of code. Once your service is registered, you can start and stop it with the systemctl shell command like any other Linux service.
Note: There are also fork() mechanisms (cloning processes), but that’s out of scope for this little post.
Service manifest
Before we can get your binary registered as a service, we need to write a service manifest file. This is just a normal text-file in INI format that defines how you want your service to run. Here is an example of such a file:
[Unit]
Description=Elements Service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=qtx
ExecStart=/usr/bin /usr/bin/ElementsService.exe

[Install]
WantedBy=multi-user.target
Before you save this file, remember to replace the username (User=) and the name of your executable.
Note: The ExecStart property can be defined in 3 ways (concrete examples follow the list):
- Direct path to the executable
- Current working directory + path to executable (like above)
- Current working directory + path to executable + parameters
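Spelled out, the three variations would look like this — the --port 8090 parameter is purely hypothetical, just to show where arguments go:

ExecStart=/usr/bin/ElementsService.exe
ExecStart=/usr/bin /usr/bin/ElementsService.exe
ExecStart=/usr/bin /usr/bin/ElementsService.exe --port 8090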
You can read more about each property here: systemd service info
Systemd
For Debian based distributions (Ubuntu branches) the most common service-host (or process manager) is called systemd. I am not even going to pretend to know the differences between systemd and the older init. There are fierce debates in the Linux community around these two alternatives. But unless you are a Linux C developer who likes to roll his own kernel on weekends, it’s not really relevant for our goals in this post. Our task here is to write useful services and make them run side-by-side with other services.
With the service-manifest file done, we need to copy it into place where systemd can find it. So start by saving the manifest file as “elements.service” here:
/etc/systemd/system/elements.service
As you probably guessed from the ExecStart property, your service executable goes in:
/usr/bin/ElementsService.exe
If all went well you can now start your service from the command-line:
systemctl start elements
And you can stop the service with:
systemctl stop elements
Resident services
Starting and stopping a service is all good and well, but that doesn’t mean it will automatically start when you reboot your Linux box. In order to make the service resident (persisted, so Linux remembers to fire it up on boot), you have to enable the service:
systemctl enable elements
If you want to stop the service from starting on boot, just disable it:
systemctl disable elements
Now there are a ton of things you can tweak and change in the service-manifest file. For example, do you want Linux to restart your service if it crashes? How many times should Linux attempt to bring the service back up? Should it only bring it back up if the exit-code is non-zero?
If you want Linux to always restart a service if it stops (regardless of reason), you set the following flag in the service-manifest:
Restart=always
If you want Linux to only restart if the service fails, meaning that the exit-code of the application is <> 0, then you use this value instead:
Restart=on-failure
You can also set the service to only start after some other service, for example if your service has networking as a criteria (which is what we set in the service-manifest above), or a database engine.
There are a ton of different settings you can apply to the service-manifest, but listing them all here would fill a small book. Better to just check the documentation and experiment a bit. So check the link and pick the ones that make sense for your particular service.
Reflections
You should be very careful with how you define restart options. If something goes wrong and your service crashes on start, Linux will keep restarting it endlessly. Automatic restart can create a loop, so it is wise to make sure it doesn’t get stuck. I would set restart to “on-failure” exclusively, so that your service has a chance to exit gracefully.
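Note that the manifest above sets StartLimitIntervalSec=0, which disables systemd’s restart rate-limiter entirely. If you want a safety net against such loops, you can cap the restarts instead. A minimal sketch (on older systemd versions the two limit directives live in the [Service] section rather than [Unit]):

[Unit]
StartLimitIntervalSec=60
StartLimitBurst=5

[Service]
Restart=on-failure
RestartSec=2

With these values systemd gives up after 5 failed starts within 60 seconds, instead of hammering the system indefinitely.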
Happy coding! And a special thanks to Benjamin Morel for his helpful posts.
Augmented reality, I don’t think so
The world used to be a big place, but having worked around Europe for a few years – and lately in the US – it appears much smaller to me. But the fact of the matter is that different nationalities have different tastes and interests.

The world is not as big as it used to be
In the US right now there is a strong interest in virtual-reality. The interest was so strong that Sony jumped on the VR bandwagon early, offering a full VR kit for the Playstation 4. This has been available for two years already (or is it three?). Here in Scandinavia though, VR is not that hot. People buy it, but not in the same volume we see in the US. It is expected to pick up this fall when the dark period begins; Norway, Sweden, Denmark and Finland are largely without sunlight most of the year – and during that time people use indoor hobbies more. But right now, VR is not really a thing up here.
In parallel with VR, Microsoft picked up the gauntlet that Google threw away earlier, namely that of augmented reality. You probably remember the Google Glass craze that hit California a few years back, right? Well, Microsoft has continued to research and develop the technology, which is now available for pre-order.
The problem? The Microsoft HoloLens 2 is a $3500 gadget that is presently aimed at business only, with emphasis on industrial design and medical applications. I don’t know about you, but forking out $3500 for what is ultimately just a curiosity is way out of my budget. There are much more important things to spend $3500 on, to be frank.
Asia, the mother of implementation
While America and Europe are hotbeds of invention, it is ultimately Asia that implements, refines and eventually perfects technology. Some might disagree with that, and there are always exceptions to the rule (3d printers and VR systems are very much American made), but what I’m talking about are “traits and inclinations” in the market. Patterns of commerce, if you like.
What usually happens when something new is made is that it costs a fortune. To push prices down, companies move production to Asia, where materials etc. are swapped out for cheaper alternatives. Some re-designing takes place — and before you know it, a product that cost $3500 when made in the US or the EU can be picked up for $799 on Amazon. After some time, production volume reaches its zenith, and the device that once cost an arm and a leg can be bought for $299 or $199.
But there are exceptions! When a technology is wildly popular, like augmented reality and VR, Asia is quick to produce its own take on the technology early – trying to get a piece of the proverbial pie. This is often a good thing, especially for consumers. It can also be a good thing for the US and EU based companies – because mistakes and design-flaws they haven’t noticed yet are taken care of by proxy.
With that in mind, I ventured into the Asian markets to see what I could find.
Banggood and Alibaba
Having searched through an avalanche of cheap VR glasses for mobile phones, I finally found something that could be worth looking into. The advert was extremely thin, citing only augmented reality in English – with everything else in Chinese (I presume; I must admit I would not spot the difference between Chinese, Japanese and Korean if my life depended on it. Tibetan I can spot due to my Buddhist training, but that’s about it).
When I ran the string of characters through Google, it returned this:
“VISION-800 3D Glasses Video Android 4.4
MTK6582 1G/2G 5MP AC WIFI BT4.0 2060P MIC”
Looking at the glasses, they have pretty much everything you would expect from an augmented reality setup. There is a camera up front, lenses, audio jacks on both sides, a few buttons and switches – and the magic words “powered by Android Wearable”. The price was $249, so it wouldn’t break the bank either.

The Vision 800 in all their glory
I should also mention that websites like Banggood and Alibaba have pretty good return policies too. These websites are actually front-ends for hundreds (if not thousands) of smaller, local outlets. The outlets get a place to sell their goods to the west, and Alibaba and Banggood take a small cut of each sale.
To manage this and make it viable, they have a rating system in place. So if one of the outlets scams you – you can vote them down. 3 complaints is enough to get the outlet kicked from either site, which I presume is a dire financial loss (considering the volume these websites push on a daily basis). So there is some degree of consumer safety compared to ordering directly. I would never order anything directly from a tech corner-shop in mainland China, because should they rip you off – there is nothing you can do about it.
Augmented? I don’t think so
When the glasses finally arrived, I was surprised at how light they were. You might expect them to be top-heavy and tip forward on the ridge of your nose – but since the weight is actually in the lenses, not the circuitry, they balance quite well.
But augmented reality? I’m sorry, but these glasses are not even in the ballpark.
The device is running a fork of Android – but not the fork adapted for wearables! The glasses also come with a stock cordless mouse, and you are in fact greeted by a plain desktop. The cordless mouse does work really well though, but I thankfully had the sense to order a $5 air-mouse (read: remote control for android devices), or I would have gone insane trying to exit applications properly.
What you can do is download some augmented reality apps from Google Play. These will tap into the camera on the glasses, and you can play a few games. But that’s really it. I noticed that the outlet had changed the title and text for these glasses a few days before they arrived here, so the whole deal is a little bit fishy. Looking at the instruction leaflet, these glasses have been sold as “movie glasses”. I would even go so far as to say they have nothing to do with augmented reality.
Media glasses
Augmented reality aside, there are interesting uses for glasses like this. If the field of view is good enough, they could make for a novel screen “on the road”. I mean, if you plug in a hybrid USB dongle that gives you both keyboard and mouse, there is nothing in the way of using the glasses as a monitor / travel PC. You have the same apps that you enjoy on your phone; a modern browser that gives you access to cloud services etc.
The glasses also have an SD card slot which is super handy – and 2Gb onboard storage. So if you are taking a flight from Europe to Australia and want to tune out noise and watch movies – these glasses might just be the ticket.

The audio works well
I must admit it was fun to install NetFlix on these and kick back. But this is also when I started to have some issues.
The first issue is that there is no focal lens involved. You are literally looking at two tiny screens. And since I depend on regular glasses, watching without my ordinary glasses is a complete blur. I had to use contact-lenses (which I hate) to get any use out of these. But if your eyesight is fine, you will no doubt have a better experience.
For me, being 100% dependent on my regular glasses, it actually makes more sense to buy a cheap, second-hand Samsung Galaxy Edge, which was designed to be used as a proper VR display, and permanently fix it to a cheap Samsung VR casing. Even the most rudimentary VR casing offers focal lenses, so you can adjust the focus to compensate for poor eyesight.
The second issue has to do with display resolution. If you have 20/20 eyesight then a high resolution is wonderful. But in my case I would actually see better if the resolution was slightly lower. Sadly the device seems fixed to what I can only presume is 1600×1024 (or something like that), and there are no options for changing the resolution, offsetting the display or skewing the picture. Again, these are factors that only become important if you have poor eyesight.
Audio
The way they solved audio is actually quite good. On each arm of the glasses you have an audio-jack out, and the kit comes with two small earbuds. And again – if you are on a long flight and just want to snuggle up in your own world – this works really well.
If you have ear-pods like I do, you can use them via the standard BT interface. But I noticed a slight lag when using these; no doubt the CPU had problems handling audio while playing a full HD movie. The lag was not there when I used the normal jack – so the culprit is probably the BT device cache.
Gaming?
I’m not a huge gamer myself; I mostly play games like Destiny on the Playstation. On the odd occasion that I jump into other games, it’s mostly retro stuff. And I have a house pretty much packed with Amiga, Silicon Graphics, and more arcade hardware than god ever intended a person to own.
Having said that, the device is capable of both native Android games – and emulation. I had no problem installing UAE (Unix Amiga Emulator), and it’s more than powerful enough to emulate an A1200 with AGA (advanced graphics architecture).
I didn’t test the casting option, even though the device can display-cast to your living room TV. Somehow it seems backwards using these as a casting source – when you already have a supercomputer in your pocket. Any modern phone, be it a Samsung or Apple device, will outperform whatever is powering these glasses – so if gaming is your thing, look elsewhere.
Final words
These glasses have potential. Or perhaps better phrased – the technology holds a lot of promise. If they had opted for focal lenses and a wider field of vision, they would have been a fantastic experience. I have no problem imagining this type of tech replacing monitors in the near future – at least for movie experiences.
I must admit it’s really tricky to hammer down a verdict. On one hand they are really fun; you can install NetFlix, browse the web and watch movies if you copy them over to an sd-card (the glasses come with a 16Gb sd-card). You have mouse control, BT, and I have no problem seeing myself on a flight to Hong Kong enjoying a movie marathon wearing these.
But are they worth $250? I would have to say no. A fair price would be something in the $70 region. If they corrected the lenses I would have no problem buying a pair at $99. And if they expanded the field of vision to cover the width of the glasses – I would absolutely pick them up at $150. But $250 for this? Sadly, I can’t say they are worth the money.
I was also surprised to find pornhub as a pre-defined shortcut in the browser (I mean, who does that?). It made me laugh first, thinking it was a cheeky joke – but as a parent who could have bought these for a child, it is utterly unacceptable. It’s not the first time I have found smut on a device from Asia. But yeah, a bit creepy.
So, I would have to give them a 3 out of 6 verdict. If you have money to burn and a long flight ahead, then by all means – they will give you a comfy way of watching movies during the flight. But the technology is (for lack of a better word) premature.
As for augmented reality – forget it. You are better off stuffing your phone inside a $100 Samsung VR casing. The official Samsung Galaxy Edge casing probably costs next to nothing by now. And for $250 you should have no problem sourcing a used Galaxy Edge phone too. Which will be 100 times better than this.
I started this post citing the inherent differences between nationalities in what they enjoy, but I must admit that through these, I can see why VR holds such potential. I can’t see myself strapping on a full suit, helmet and gloves just to play a game or do some work. But glasses like these (though not these) are absolutely in the vicinity of “practical”. Just a damn shame they didn’t do a full width LCD with focal lenses; then I would have promoted them.
Right now: a fun curiosity, good for watching the odd movie if your eyesight is perfect – but for the rest of us, it’s just not worth the dough.
Quartex: Mali GPU glitches
EDIT: I did further testing after this article was written, and believe the source of this to be heat. Even with extra fans, running games like Tyrian (asm.js) that are extremely demanding, plus constantly resizing a graphics-intensive window, the temperature reached 71 degrees C very quickly. And this was with two cabinet fans helping the built-in fan cool the device. It is thus not unthinkable that when running solo (no extra fans) the kernel shuts the device down to avoid cooking the chipset. Which would also explain why the device won’t boot properly afterwards (the device is still hot).
Glitches
Something really strange is happening on Chrome and Firefox for ARM. JavaScript is not supposed to be able to take down a system, and in this case it’s not an attempt to do so either — yet for some reason I have managed to take down the ODroid XU4 with both Chrome and Firefox lately.
ODroid XU4
I should lead with the fact that I’m not able to replicate this on x86. One of the things I really love about the ODroid XU4 is that it’s affordable, powerful, and probably the only SBC I have used that runs stably on the Mali GPU. As you probably know, I tested at least 10 different SBCs back in 2018, and whenever there was a Mali GPU involved, the product was either haunted by instabilities or lacked drivers altogether.
Since the codebase for Chrome (and I presume Firefox) is ultimately the same between platforms, it leaves a question-mark over the ODroid. It is by far the most stable SBC I have tested so far (except for the PI, which is sadly underpowered for this task), but stable doesn’t mean flawless. And to be honest, Amibian.js is pushing web tech to the very limits.
Not Mali again
The reason I suspect the Mali to be the culprit behind all this is that the “bug”, if we can call it that, happens exclusively during resize. So if there is a lot going on inside a desktop-window, you can sometimes provoke the ODroid to cold-crash and reboot. You actually have to power the board down and switch it back on for it to boot properly.

Cloudripper ~ 5x ODroid XU4 [40 cores] in a PICO 5h cube
The challenge is that there is no way to debug or catch this error, because when it occurs the whole system literally goes down. There is no exception thrown, nor is the browser process terminated (not even a log entry, so it’s a clean cut) — the system reboots on the spot. Since it fails on reboot when opening X (setting a screen-mode), I again point the finger at the GPU. Somehow a flag or lock survives the cold-reboot, and that’s why you have to manually switch it off and on again.
This is the exact problem that made the NanoPI Fire useless. It only shipped with Android embedded drivers. The X drivers could hardly open a display without crashing. Such a waste of a good CPU.
x86 as head
The ODroid is perfect for a low-cost Amibian.js experience, but I was unsure if it would handle the payload. Interestingly it handles it just fine, and even with a high-speed action game running plus background tasks, we are not even using 50% of the CPU.
RAM is holding up too; memory consumption while running Tyrian and having a few graphics viewers open is at a reasonable 700 MB (of 2 gigabytes in total).

Tyrian jogs along at 45 fps ~ that is not bad for a $45 SBC
Right now this strange error is rare, but if it continues or grows into a problem (Chrome is hardly usable at all, only Firefox holds up), then I have no option but to replace the master SBC in the cluster with something else. The x86 UP board is more than capable, but it would be a shame to break the price range because of that (excuse my language) crap Mali GPU. I honestly don’t understand why board makers insist on using a Mali. Every board that has one is haunted by problems and gets poor reviews.
It will be exciting to check out the DragonBoard, although I fear 1Gb of memory will not be enough for smooth operation. Not without a SATA interface and a good swap-file.
Android and Delphi
One alternative is to switch to Android and use Delphi to code a custom Chromium Embedded webview. I am hoping to avoid the overhead of Android, but Delphi would definitely be a bonus with Android embedded (“Android of things”).
We will see.
Amibian.js under the hood
Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!

In a life-preserver no less 😀
But, with any new technology or invention there are two common traps that people can fall into. The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google’s V8 run JavaScript almost as fast as compiled binary code (“native” means machine code, like that produced by a C/C++ compiler, Pascal compiler or anything else that produces programs that run under Linux or Windows).
It takes some adjustments, especially for traditional programmers that haven’t paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled (“just in time”, meaning the compilation takes place inside the browser) to real machine-code using state of the art optimizing compilers. So the JavaScript of 2018 is by no means the JavaScript of 2008.
The second trap you can fall into is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don’t fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won’t deliver. Rome was not built in a day, and it’s wise to study all the factors before passing judgement.
I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.
Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it’s easy to mix these things up, I’m underlining this – just in case.
What the heck is Amibian.js?
Amibian.js is a group of services and libraries that combined create a portable operating-system that renders to HTML5. A system that was written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.
The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It’s much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.
The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to the other without any changes. So if you have a running Amibian.js system on your x86 PC and copy all the files to an ARM computer – you don’t even have to recompile the system. Just fire up the services and you are back in the game.
Now before you dismiss this as “yet another web mockup” please remember what I said about JavaScript: the JavaScript in 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” – into the most important programming language of our time.
So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.
In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It is capable of delivering 120 fps, although browsers cap rendering at 60 fps — and at that cap it runs flawlessly, making full use of standard browser technologies (WebGL).
So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.
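If you have never seen asm.js, it is just JavaScript with type annotations the JIT can trust. Amibian.js itself is compiler-generated, not written like this by hand — the hand-written miniature below only shows what asm.js-style code looks like:

function MiniMath() {
  "use asm"; // opts this module in to asm.js validation
  function add(a, b) {
    // the |0 coercions mark a and b as 32-bit integers,
    // which is what lets the engine compile this ahead of time
    a = a | 0;
    b = b | 0;
    return (a + b) | 0;
  }
  return { add: add };
}

var math = MiniMath();
console.log(math.add(2, 3)); // 5, executed as optimized machine-code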
Which is why Amibian.js is able to do things that people imagine impossible!
Ok, but what does Amibian.js consist of?
Amibian.js consists of many parts, but we can divide it into two categories:
- A HTML5 desktop client
- A system server and various child processes
These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between the traditional, machine-code based OS and our web variation is that our version doesn’t have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).

Enjoying other cloud applications is easy with Amibian.js, here is Plex, a system very much based on the same ideas as Amibian.js
And for the record: I’m trying to avoid a bare-metal OS, otherwise I would have written the system using a native programming language like C or Object-Pascal. So I am not using JavaScript because I lack skill in native languages; I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I had used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances, because chipset and CPU would be determining factors.
The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs that are written for Amibian.js can call on these to perform a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.
The greatest strength of Amibian.js is that it’s designed to run clustered, using as many CPU cores as possible. It’s also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn’t need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (ODroid XU4, Raspberry PIs and the Tinkerboard are brilliant candidates).
But why Amibian.js? Why not just stick with Linux?
That is a fair question, and this is where the roles I mentioned above come in.
As a software developer, many of my customers work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, ticket systems for trains and buses; and all of them end up having to solve the same problems.
What each of these manufacturers have in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock solid application model suitable for distributed computing. You might have 1 ticket booth, or 10.000 nationwide. There are no systems available that are designed to deal with web-technology on that scale. Yet 😉
Let’s look at a couple of real-life scenarios that I have encountered; I’m confident you will recognize a common need. So here are some roles that Amibian.js can assume and help deliver a solution rapidly. It also gives you some ideas of the economic possibilities.
Updated: Please note that we are talking javascript here, not native code. There are a lot of native solutions out there, but the whole point here is to forget about CPU, chipset and target and have a system floating on top of whatever is beneath.
- When you want to change some settings on your router – you login to your router. It contains a small apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update, since it’s connected to a native apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
- When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel with varying degrees of success.
- Hotels have more or less the exact same need but on a smaller scale, especially the larger hotels where the lobby has information booths, and each room displays a web interface via the TV.
- Shopping malls face the same challenge, and depending on the size they can need anything from a single to a hundred nodes.
- Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages and enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.
You are probably starting to see the common denominator here?
They all need a web-based desktop system, one that can run complex HTML5 based media applications and give them the same depth as a native operating-system; which is pretty hard to achieve with JavaScript alone.
Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.
And as the name implies, it has roots in the past, with the machine that defined multimedia, namely the Commodore Amiga. So the relation is more than just visual; Amibian.js uses the same system architecture – because we believe it’s one of the best systems ever designed.
If JavaScript is so poor, why should we trust you to deliver so much?
First of all, I’m not selling anything. It’s not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done Amibian.js will be free for everyone (LGPL). And I’m also writing it because it’s something that I need and that I haven’t seen anywhere else. I think you have to write software for yourself, otherwise the quality won’t be there.
Secondly, writing Amibian.js in raw JavaScript with the same amount of functionality and depth would take years. The reason I am able to deliver so much functionality quickly is that I use a compiler system called Smart Mobile Studio. This saves months and years of development time, and I can use all the benefits of OOP.
Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).
Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ builder and Delphi).
The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.
Smart Mobile Studio uses the object-pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data driven applications. It also thrives in embedded systems and low-level system services. In short: it’s a lot easier to maintain 50.000 lines of object pascal code than 500.000 lines of JavaScript code.
Amibian.js, both the service layer and the visual HTML5 client application, is written completely using Smart Mobile Studio. This gives me as the core developer of both systems a huge advantage (who knows it better than the designer right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.
Traditional JavaScript doesn’t have OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the object-pascal community — components and libraries written in Delphi or Freepascal, which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it’s been a popular toolkit for ages (Pascal is in fact a couple of years older than C).
But how would I use Amibian.js? Do I install it or what?
Amibian.js can be set up and used in 4 different ways:
- As a true desktop, booting straight into Amibian.js in full-screen
- As a cloud service, accessing it through any modern browser
- As a NAS or Kiosk front-end
- As a local system on your existing OS; a batch script will fire it up (as sketched below) and you can use your browser to access it on https://127.0.0.1:8090
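For the last option, the startup script can be as trivial as this sketch — the service file names are placeholders, not the actual Amibian.js layout:

#!/bin/sh
# start the node.js services in the background (names are placeholders)
node core-service.js &
node file-service.js &
# open the desktop in the default browser
xdg-open https://127.0.0.1:8090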
So the short answer is: yes, you install it. But it’s the same as installing Chrome OS. It’s not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform independent, chipset agnostic system. Something that doesn’t care if you are using ARM, x86, PPC or Mips as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.
But the average non-programmer will most likely set up a dedicated machine (or several), or just deploy it on their home NAS.
The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.
The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.
The third option is for developers. Amibian.js is a desktop system, which means it’s designed to host more elaborate applications. Normally you would just embed an external website into an IFrame, but Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window’s message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.
Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
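Seen from the hosted application’s side, that hand-shake could look roughly like this. A minimal sketch using the standard browser MessagePort API — the command and field names are invented for illustration; the real messages must conform to the Ragnarok specification:

// the desktop posts a hand-shake message containing our port
window.addEventListener('message', (event) => {
  const port = event.ports[0];
  if (!port) return;

  port.onmessage = (reply) => {
    const msg = JSON.parse(reply.data);
    console.log('desktop replied:', msg);
  };

  // all "kernel" calls travel through the port as JSON
  port.postMessage(JSON.stringify({
    cmd: 'file-read',                  // hypothetical command name
    path: 'home:documents/readme.txt'  // hypothetical path syntax
  }));
});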
You mention hosted applications, do you mean websites?
Both yes and no: Amibian.js supports 3 types of applications:
- Ordinary HTML5/JS based applications, or “websites” as many would call them. But like I talked about above they have to establish a dialog with the desktop before they can do anything useful.
- Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
- LDEF compiled bytecode applications, a 68k inspired assembly language that is JIT compiled by the browser (commonly called “asm.js”) and runs extremely fast. The LDEF virtual machine is a sub-project in Amibian.js
The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.

Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method
The first part of the Amibian.js project is to establish the desktop and back-end services. The second part of the project is to create the world’s first cloud based development platform. A full Visual Studio clone, if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)
Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic, it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming and above all – very easy to learn.
More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.
And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.
So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.
But why don’t you just use ChromeOS?
There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic, with “no native code allowed”. This is why the entire back-end and service layer targets node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).
Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.
A second reason is: Chrome OS might be free, but it’s only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It’s quite frankly boring and too boxed in for any practical use; except for Google and its technology partners, that is.
I wanted a system that I could move around, that could run in the cloud, on cheap SBCs. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.
A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.
What is this Amiga stuff, isn’t that an ancient machine?
In computing terms yes, but so is Unix. Old doesn’t automatically mean bad, it actually means that it’s adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.

The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors
Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS. Quite simply because the way Amiga OS is organized, and the way you approach computing on the Amiga, is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It’s a system that you could learn to use fully within just a couple of days of exploring; and no manuals.
But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as much of the original ROM Kernel functions as possible. Naturally I can’t port all of it, because it’s not really relevant for Amibian.js. Things like device-drivers serve little purpose for Amibian.js, because Amibian.js talks to node.js, and node talks to the actual system, which in turn handles hardware devices. But the way you would create windows, visual controls, bind events and create a modern, event-driven application has been preserved to the best of my ability.
But how does this thing boot? I thought you said server?
If you have set up a dedicated machine with Amibian.js, then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons, or services as they are called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.
But that is just for starting the system. Your personal boot sequence which deals with your account, your preferences and adaptations – that boots when you login to the system.
When you login to your Amibian.js account, no matter if it’s just locally on a single PC, a distributed cluster, or via the browser into your cloud account — several things happen:
- The client (web-page if you like) connects to the server using WebSocket (see the sketch after this list)
- Login is validated by the server
- The client starts loading preferences files via the mapped filesystem, and then applies these to the desktop.
- A startup-sequence script file is loaded from your account, and then executed. The shell-script runtime engine is built into the client, as is REXX execution.
- The startup-script will setup configurations, create symbolic links (assigns), mount external devices (dropbox, google drive, ftp locations and so on)
- When finished the programs in the ~/WbStartup folder are started. These can be both visual and non-visual.
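The first steps of that sequence could look roughly like this on the client side. A bare-bones sketch only — the port, command and field names are placeholders, not the actual Ragnarok messages:

// step 1: connect to the core service over WebSocket
const socket = new WebSocket('ws://127.0.0.1:8090');

socket.onopen = () => {
  // step 2: ask the server to validate our credentials
  socket.send(JSON.stringify({
    cmd: 'login',          // hypothetical message name
    username: 'qtx',
    password: 'secret'
  }));
};

socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.cmd === 'login-ok') {
    // steps 3-6: load preferences, run the startup-sequence,
    // then launch the programs in ~/WbStartup
  }
};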
As you can see, Amibian.js is not a mockup or “fake” desktop. It implements all the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced, where file-data is loaded via special drivers; drivers that act as a bridge between a storage service (a harddisk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.
In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).
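The shape of such a driver is easiest to show as code. Everything below is purely illustrative — the actual Amibian.js driver standard defines its own contract and naming:

// hypothetical storage driver; every name here is invented
class FtpDriver {
  constructor(host, credentials) {
    this.host = host;
    this.credentials = credentials;
  }
  async readFile(path) {
    // fetch file-data from the FTP host, return it as a buffer
  }
  async writeFile(path, data) {
    // push file-data back to the storage service
  }
  async listDirectory(path) {
    // return entries so the desktop can render the folder
  }
}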
Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?
Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy-images with a game you love, they will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂
But Amiga emulation is just the beginning. More and more emulators are ported to JavaScript; you have Nes, SNes, N64, PSX I & II, Sega Megadrive and even a NEO GEO port. So playing your favorite console games right in the browser is pretty straight forward!
But the really interesting part is probably QEmu. This allows you to run x86 instances directly in the browser too. You can boot up in Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point) but it shows some of the potential of the system.
I have been experimenting with a distributed emulation system, where the emulation is executed server-side, and only the graphics and sound is streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!
I heard something about clustering, what the heck is that?
Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don’t have to be on the same machine — you can place them on separate machines, and thus the system is able to work faster.
A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.
The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called “headless” computers, meaning that they don’t have a HDMI port – they are designed to be logged into, and software set up, via SSH or similar tools. The last machine is an ODroid XU4 with a HDMI out port, which serves as “the master”.
The architecture is quite simple: we allocate one whole SBC to a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture, the machine that deals with the desktop clients doesn’t have to do all the grunt work. It accepts tasks from the user and hosted applications, and then delegates the tasks between the 4 other machines.
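Node.js ships with a built-in module for exactly this kind of self-copying, called cluster. A minimal sketch of how a service can fork itself across all 8 cores — doWork() is a placeholder for the service’s real logic, not an Amibian.js function:

const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // fork one worker per CPU core (8 on an ODroid XU4)
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  // respawn workers that die so the service stays up
  cluster.on('exit', () => cluster.fork());
} else {
  doWork(); // each worker runs the actual service code
}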
Note: The number of SBCs is not fixed. Depending on your use, you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40 core monster shipping at around $250.
But like mentioned, you don’t have to buy this to use Amibian.js. You can install it on a single spare x86 PC you have, or daisy chain a couple of older PCs on a switch for the same result.
Why Headless? Don’t you need a GPU?
The headless SBCs in the initial design all have a GPU (graphical processing unit) as well as audio capabilities. What they lack are GPIO pins and 3 additional USB ports. So each of the nodes in our cluster can handle graphics at blistering speed — but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.
The 40 core cluster I use has more computing power than northern Europe had in the early 80s; that’s something to think about. And the price-tag is under $300 (!). I don’t know about you, but I always wanted a proper mainframe — a distributed computing platform that you can log into and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!
Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant for emulating an Amiga directly in the browser, a more interesting design is to decouple the emulation from the output. In other words: run the emulation at full speed server-side, and just stream the display and sound back to the Amibian.js display. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi-threading) and fully utilizes the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).
I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but this is something to research later.
Will Amibian.js replace my Windows box?
That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it’s wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.
Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE is in place, then people will start using it en-mass and produce applications for it. It’s always a bit of work to reach that point and create critical mass.

Object Pascal is awesome, but modern, native development systems are quite demanding
My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and an ordinary internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster, and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.
I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super expensive hardware to order, no absurd hosting fees, and finally a system that we all can shape and use in a plethora of systems: from a fully fledged desktop to a super advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.
And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.
Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!
I can’t wait to finish the services and cluster this sucker on the ODroid rack!
If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow
Nano PI Fire 3, part two
If you missed the first installment of this test, please click here to catch up. In this installment we are just going to dive straight into general use and get a feel for what can and cannot be done.
Solving the power problem
As mentioned in the previous article, a normal mobile charger (5 volt, 2 amps) is not enough to power the nano-pi. Since I have misplaced my original PI power-supply (5 volt / 3 amps) I decided to cheat: I plugged the power USB into my PC, which will deliver as much juice as the device needs. I don’t have time to wait for a new PSU to arrive, so this will have to do.
But for the record (and underlined): a proper PSU with at least 2.5 amps is essential for using this board. I suggest you order the official Raspberry PI 3b power-supply, and if you should find one with 3 amps, that would be even better.
Web performance
The question on everyone’s mind (or at least mine) is: how does the Nano-PI Fire 3 perform when rendering cutting edge, hardcore HTML5? Is this little device a potential candidate for running “The Smart Desktop” (a.k.a. Amibian.js, for those of you coming from the retro-computing scene)?
Like I suspected earlier, X (the Linux windowing system) has no drivers that deliver hardware acceleration on this board at all.

Lubuntu is a sexy desktop, no doubt about that, but it’s overkill for this device
This is quite easy to test: select a region on the Lubuntu desktop and move the mouse-cursor around while holding down the left mouse button. If the selection lags terribly behind the cursor, that is a clear indicator that no acceleration exists.
And I was right on the money, because there is no acceleration whatsoever for the Linux distribution. It struggles hopelessly to keep up with the mouse-pointer as you move it around with an active selection; something that would be silky smooth had the GPU been tasked with the job.
But hardware acceleration is not just about the desktop. It’s not some flag you enable that magically affects everything, but rather several APIs at either the kernel level or immediate driver level (modules the kernel loads), each affecting different aspects of a system.
So while the desktop “2d blitting” is clearly CPU driven, other aspects of the system can still be accelerated (although that would be weird and rare – but considering how Asus messed up the Tinkerboard, I guess anything goes these days).
Asking Chrome for the hard facts
I fired up Google Chrome (which is the default browser, thank god) and entered the magic URL:
chrome://gpu
This is a built-in page that provides a detailed report of what Chrome has detected on the current system, right down to specific GPU features used by OpenGL.
As expected, there was NO acceleration whatsoever. So I was quite surprised that it managed to run Amibian.js at all. Even without hardware acceleration it outperformed the Raspberry PI 3b+ by a factor of 4 (at the very least), and my particle demo ran at a whopping 8 fps (frames per second). The original Raspberry PI could barely manage 2 fps. So the Nano-PI Fire is leagues ahead of the PI in terms of raw CPU power, which is brilliant for headless servers or computational tasks.
FriendlyCore vs Lubuntu? QT for the win
Now here is a funny thing. So far I have used the standard Lubuntu Linux image, and performance has been interesting to say the least: no hardware acceleration, impressive CPU results, but still – what good is an SBC Linux distro without fast graphics? Sure, if you just want a headless file server or host services, then you don’t need a beefy GPU. But here is the twist:
Turns out the makers of the board have a second, QT oriented distro called Friendly-core. And this image has OpenGL-ES support and all the acceleration missing from Lubuntu.
I was pretty annoyed with how Asus gave users the run-around with Tinkerboard downloads, but they have thankfully cleaned up their act and listened to their customers. Friendly-elec might want to learn from Asus’ mistakes in this area.

QT has a rich history, but it’s being marginalized by node.js and Delphi these days
Alas, the Friendly-core xenial 4.4 Arm64 image turned out to be a pure embedded development image. This is why the board has a debug port (which is probably awesome if you are into QT development). So this is for QT developers that want to use the board as a single-application system: they write the code on Windows or Linux, compile, and it’s all transported to the board with live debugging back to the devtools they use. In other words: not very useful for non C/C++ QT developers.
Android Lollipop
I have only used Android on a pad and the odd Samsung Galaxy phone, so this should be interesting. I downloaded the Lollipop disk image, burned it to the SD card and booted up.
After 20 minutes with a blank screen I gave up.
I realize that some Android distros download packages ad-hoc and install directly from a repository, so it can take some time to get started; but 15-20 minutes with a black screen? The Android logo didn’t even show up — and that should be visible almost immediately, regardless of a network install or not.
This is really a great shame, because I wanted to test some Delphi Firemonkey applications on it, to see how well it scales with the more demanding GPU tasks. And yes, I did try a different SD card to be sure it wasn’t a disk error. Same result.
Back to Lubuntu
Having spent a considerable time trying to find a “wow” factor for this board, I have to just surrender to the fact that it’s not there. This is not a “PI” any more than the Tinkerboard is a PI, and appending “pi” to a product name will never change that.
I can imagine the Nano-PI Fire 3 being an awesome single-application board for QT C/C++ developers though, with a dedicated debug port making it a snap to transport, execute and do live debugging directly on the hardware. But for general DIY hacking, for native Android development with Delphi, for node.js development with Smart Mobile Studio – or just for kicking back with emulators like Mame, UAE or whatever tickles your fancy — it’s just too rough around the edges. Which is really a shame!
So at the end of the day I re-installed Lubuntu, and figure I just have to wait until Friendly-elec gets their act together and issues proper drivers for the Mali GPU. So it’s $35 straight out the window — but I can live with that. It was a risk, but at that price it’s not going to break the bank.
The positive thing
The Nano-PI Fire 3 is yet another SBC in a long list that falls short of its potential. Like many others they try to use the word “PI” to channel some of the Raspberry PI enthusiasm their way – but the quality of the actual system is not even close.
In fact, using PI in their product name is setting themselves up for a fall – because customers will quickly discover that this product is not a PI, which can cause some subconscious aversion and resentment.

The Nano rendered Amibian.js running some very demanding demos 4 times as fast as the PI 3b; one can only speculate what the board could do with proper drivers for the GPU.
The only positive feature the Fire-3 clearly has to offer is abundantly more CPU power. It is without a doubt twice as fast (if not 3 times as fast) as the Raspberry PI 3b. The fact that it can render highly demanding and complex HTML5 demos 4 times faster than the Raspberry PI 3b without hardware acceleration is impressive. And this is a $35 board after all, the same price as the PI.
But without proper drivers for the Mali, it’s a useless toy. Powerful and with great potential, but utterly useless for multimedia and everything that relies on fast 2D and 3D graphics. For UAE (Amiga emulation) you can pretty much forget it. Even if you could compile the latest UAE4Arm with SDL as its primary display framework – it wouldn’t work, because SDL depends on the graphics drivers. So it’s back to square one.
But the CPU packs a punch, that is without question.
Final verdict

There are a lot of stable and excellent options out there, take your time
I was planning to test UAE next, but as I have outlined above: without drivers that properly expose and delegate the power of the Mali, it would be a complete disaster. I’m not even sure it would build.
As such I will just leave this board as is. If it matures at some point that would be great, but my advice to people looking for a great SBC experience is: get the new Raspberry PI 3b+ and enjoy learning and exploring there.
And if you are into Amibian.js or making high quality HTML5 kiosk / node.js based systems, then fork out the extra $10 and buy an ODroid XU4. Or if you pay $55 you can pick up the Asus Tinkerboard, which is blisteringly fast and great value for money, despite its turbulent introduction.
Note: You cannot go wrong with the ODroid XU4. It’s affordable, stable and fast. So for beginners it’s either the Raspberry PI 3b+ or the ODroid; these are the most mature in terms of software, drivers and stability.
Nano-pi Fire 3: The curse of Mali
Being able to buy SBCs (single board computers) for as little as $35 has revolutionized computing as we know it. Since the release of the Raspberry PI 1b back in 2012, single board computers have gone from being something electrical engineers work with, to something everyone can learn to use. Today you don’t need a background in electrical engineering to build a sophisticated embedded system; you just need a positive spirit, willingness to learn and a suitable SBC starter-kit.
Single board computers
If you are interested in SBCs for whatever reason, I am sure you have picked up a few pointers about what works and what doesn’t. In these times of open forums and 24/7 internet, the response-time from customers is close to instantaneous; and both positive and negative feedback should never be taken lightly. It’s certainly harder to hide a poor product in 2018, so engineers thinking they can make a quick buck selling sloppy tech should think twice.
I have no idea how many boards have been released since 2016, but some 20+ new boards feels like a reasonable number. Which under normal circumstances would be awesome, because competition can be healthy. It can stimulate manufacturers to deliver better quality, higher performance and to use eco-friendly resources.
But it can also backfire and result in terrible quality and an unhealthy focus on profit.
The Mali SoC
The Mali graphics chipset is a name you see often in connection with SBCs. If you read up on the Mali SoC it sounds fantastic: all that power, an open architecture, partner support – surely this design should rock, right? If only that were true. My experience with this chipset, which spans a variety of boards, is anything but fantastic. It’s actually a common factor in a growing list of boards that are unstable and unreliable.
I don’t have an axe to grind here; I have tried to remain optimistic and positive about every board that passes my desk. But Mali has become synonymous with awful performance and unreliable operation.
Out of the 14-odd boards I have tested since 2016, the 8 boards that I count as useless all had the Mali chipset. This is quite remarkable considering Mali has an open driver architecture.
Open is not always best
If you have been into IOT for a few years, you may recall the avalanche of critique that hit the Raspberry PI foundation for their choice of shipping with the Broadcom SoC. Broadcom has been a closed system with proprietary drivers written exclusively by the vendor, which made a lot of open-source advocates furious [at the time].
You know what? Going with the Broadcom chipset is the best bloody move the PI foundation ever did. I don’t think I have ever owned an SBC or embedded platform as stable as the PI, and the graphics performance you get for $35 is simply outstanding. Had they listened to their critics and used Mali on the Raspberry PI 2b, it would have been a disaster. The IOT revolution might never even have occurred.
The whole point of the Mali open driver architecture is that developers should have easy access to documentation and examples – so they can quickly implement drivers and release their product. I don’t know what has gone wrong here, but either developers are so lazy that they just copy and paste code without properly testing it – or there are fundamental errors in the hardware itself.
To date the only board that works well with a Mali chipset, out of all the boards I have bought and tested, is the ODroid XU4. Which leads me to conclude that something has gone terribly wrong with the art of making drivers. This really should not be an issue in 2018, but the number of bankrupt Mali boards tells another story.
Nano-PI Fire 3
When reading the specs on the Nano-pi fire 3, I was impressed with just how much firepower they managed to squeeze into such a tiny form-factor. Naturally I was sceptical due to the Mali, which so far only has the ODroid going for it. But considering the $35 price it was worth the risk; worst case I can recycle it as a headless server or something.
And the board is impressive! Let there be no doubt about the potential of this little thing, because from an engineering point of view it’s mind-blowing how much technology $35 buys you in 2018.
I don’t want to sound like a grumpy old coder, but when you have been around as many SBCs as I have, you tend to hold back on the enthusiasm. I got all worked-up over the Asus Tinkerboard for example (read part 1 and part 2 here), and despite the absolutely knock-out specs, the Mali drivers and shabby kernel work crippled an otherwise spectacular board. I still find it inconceivable how Asus, a well-respected global technology partner, could have allowed a product to ship with drivers not even worthy of public domain. And while they have updated and made improvements, it’s still not anywhere near what the board could do with the right drivers.
My experience with the Nano-PI so far has been the same as with many of the other boards, especially those made and sold straight from smaller Asian manufacturers:
- Finding the right disk-image to download is unnecessarily cumbersome
- Display drivers lack hardware acceleration
- Poor help forums
- “Wiki” style documentation
- A large Linux distro that maxes out the system
More juice
The first thing you are going to notice with the Nano-pi is how important the power supply is. The nano ships with only one USB socket (one!) so a USB hub is the first thing you need. When you add a mouse and keyboard to that equation, you have already maxed out a normal 5 volt 2 amp mobile power supply.
I noticed this after having problems booting properly with a mouse and keyboard plugged in. I first thought it was the SD card, but no matter what card I tried – it still wouldn’t boot. It was only when I unplugged the mouse and keyboard that I could log in. Or should we say, can’t log in, because you don’t have a keyboard attached (sigh).
Now in most cases a Raspberry PI would run fine on 5 volts 2 amps, at least for ordinary desktop work; but the nano will be in serious trouble if you don’t give it more juice. So your first purchase should be a proper 5 volt 3 amp PSU. This is also recommended for the original Raspberry PI, but in my experience you can do quite a lot before you max out a PI.
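The back-of-the-envelope math is simple enough: a 5 volt 2 amp charger tops out at 5 × 2 = 10 watts, while a 5 volt 3 amp supply gives you 15 watts. That extra headroom is what keeps the board stable once a hub, mouse and keyboard start drawing current.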
Bluetooth
A redeeming factor for the lack of USB ports and power scaling is that the board has Bluetooth built-in. So once you have paired and connected a BT keyboard, things will be easier to work with. Personally I like my keyboard and mouse to be wired; I hate having to change batteries or being disconnected at random (which always happens when you least need it). So the lack of USB ports and power delegation is negative for me, but objectively I can live with it as a trade-off for more CPU power.
Lack of accelerated graphics
It’s not hard to check whether X uses the GPU or not. Select a large region of the desktop (holding the left mouse button down, obviously) and watch in terror as it sluggishly tries to catch up with the cursor, repainting every cached co-ordinate. Had the GPU been used properly you wouldn’t even see the repaint of the rectangle; it would be smooth and instantaneous.
SD-card reader
I’m sorry, but the SD card reader on this puppy is the slowest I have ever used. I have never tested a device that boots so slowly, and even something simple like starting Chrome takes ages.
I tested with a cheap SD card but also a more expensive class 10 card. I’m really trying to find something cool to write about, but it’s hard when boot times are worse than the Raspberry PI 1b back in 2012.
1 gigabyte of RAM
One thing that I absolutely hate about some of these cheap boards is how they imagine Ubuntu to be a stamp of approval. The Raspberry PI foundation nailed it by creating a slim, optimized and blisteringly fast Debian distro. This is important, because users don’t buy alternative boards just to throw that extra power away on Ubuntu; they buy these boards to get more CPU and GPU power (read: better value for money) for the same price.
Lubuntu is hopelessly obese for the hardware, as is the case with other cheap SBCs as well. Something like Pixel is much more interesting: you get a slim, efficient and optimized foundation to build on (or strip down). Ubuntu is quite frankly overkill and eats up all the extra power the board supposedly delivers.
When it comes to RAM, 1 gigabyte is a bit too small for desktop use. The reason I say this is because it ships with Ubuntu – why would you ship with Ubuntu unless the desktop was the focus? Which again raises the question: why create a desktop Linux device with 1 gigabyte of memory?
The nano-pi would rock with a slim distro, and hopefully someone will bake a more suitable disk-image for it.
Verdict so far
I still have a lot to test, so giving you a final verdict right now would be unfair.
But I must be honest and say that I’m not that happy about this board. It’s not that the hardware is particularly awful (although the Mali drivers render it almost pointless); it’s just that it serves no clear purpose.
In order to turn this SBC into a reasonable device you have to buy parts that bring the price up to what you would pay for an ODroid XU4. And to be honest, I would much rather have one ODroid XU4 than four nano-pi boards. You have plenty of USB ports, good power scaling (the ODroid will start just fine on a normal charger), Bluetooth and pretty much everything you need.
For those rare projects where a single USB port is enough – although I cannot for the life of me think of one right now – then sure, it may be cost-effective in quantity. But for homebrew servers, gaming rigs and/or your first SBC experience, I think you will be happier with an original Raspberry PI 3b+, an ODroid XU4 or even the Tinkerboard.
Modus operandi
Having said all that… there is also something to be said about modus operandi. Different boards are designed for different systems. It may very well be that this board is designed to run Android as its primary system. So while they provide a Linux image, that may in fact only be a “bonus” distro. We shall soon see, as I will test Android next.
Next up, how does it fare with the multi-threaded uae4arm? Stay tuned for more!
The Tinkerboard Strikes Back
For those that follow my blog, you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it – in fact I bought two of the boards, because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never emerged for the user. It was released prematurely, and I think most people that bought it agree on this.
Since my initial review those months ago, good things have happened. Asus seem to have listened to the “poonami” of negative feedback and adapted their website accordingly. Unlike the first time I visited, when you literally had to dig through recursive menus (which were less than intuitive in this case) just to download the software – the disk images are now available at the bottom of the product page. So thumbs up for that (!)
They have also made the GPIO programming API a lot easier to get hold of; downloading it is reduced to a “one liner” for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever popular Python and Scratch.
I am a bit disappointed that they don’t provide freepascal units. A lot of developers use object pascal on these boards after all, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good pascal programmer – but it would be cool if Asus remembered that there are more languages in the world than C and Python.
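For illustration, a ported header boils down to little more than a handful of external declarations. The unit, library and function names below are entirely hypothetical (you would lift the real ones from the C header Asus ships), but the mechanics look like this:

unit tinkergpio;

{$mode objfpc}

interface

// Hypothetical identifiers for illustration only; the real function
// and library names must be taken from the actual Asus C header.
function gpio_setup: integer; cdecl; external 'tinkergpio';
procedure gpio_write(pin, value: integer); cdecl; external 'tinkergpio';

implementation

end.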
All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the text-book example of how not to release a product (the whole release was odd; Asus is a huge, multi-national corporation, yet their release had “three-man basement band” written all over it).
So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!
Smart IOT
At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and the like; basically anything with a touch-screen that does something.
One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix is written 100% in node.js, so we are talking serious firepower here.
Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end that couples with an HTML5 front-end. This is a ready-to-use environment that you can deploy your own applications through. Things like storage, a nice looking UI, user logon and credentials and much, much more are all implemented for you. You don’t have to use it of course; you can write your own system from scratch if you like. We created “Amibian” to demonstrate just how powerful Smart Pascal can be in the right hands.
With this in mind – my main concern when testing SBCs (single board computers) is obviously web performance. By default JavaScript is a single-core, event-driven runtime system; you can spawn threads of course, but it’s done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system friendly, a gentle giant if you like, which has turned out to be a good thing – because the way JS schedules execution makes it ideal for clustering!
Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost a whole ecosystem in themselves. With JIT compilers and LLVM optimization — it’s a whole new ballgame.
Making a scale
To give you a better context, and to see where the Tinkerboard lands on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and seeing how each board performs. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and, last but not least, the Asus Tinkerboard.
I set up the following applications to compile with the desktop system, meaning that they were compiled within the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:
- Skid-Row intro remake using the CODEF library
- Quake 3 asm.js build
- Plex
OK let’s go through them and see where the chips land!
The Raspberry PI 3b
The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (UAE4Arm contains hand-optimized assembler, so not exactly a fair fight) – but when it comes to modern HTML5, the PI doesn’t stand a chance. You could perhaps use a Raspberry PI 3b for simple applications that are not graphics and CPU intensive, but you can forget about anything remotely taxing.
It ran Bassoon reasonably fast, but all in all you really don’t want a Raspberry when doing high quality IOT, unless it’s headless code and node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box – in fact you can target 40 embedded systems without any change to your code. But for large, high quality web front-ends, the PI just won’t cut it.
- Skid-Row: 1 frame per second or less
- Quake: Can’t even start, just forget it
- Plex: Starts but it lags so much you can’t watch anything
But hey, I never expected $35 to give me a kick ass ARM experience anyways. There are 1000 things the PI does very well, but HTML is not one of them.
ODroid XU4

The ODroid packs a lot of power!
The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here. It’s only 10€ more than the Raspberry PI, which is a toy at best – and the ODroid XU4 delivers a good Linux desktop. It’s well worth the extra 10€ when compared to the PI.
Personally I don’t understand why people keep buying PIs when there are so many better options on the market now. At least not if web technology is involved. A small server or emulator, sure, but not HTML5 and browsers. The PI just can’t handle it.
- Skid-Row: 4-5 frames per second
- Quake: Runs at very enjoyable speed (!)
- Plex: Runs well but you may want to pick SD or 720p to avoid lags
What really shocked me was that the ODroid XU4 can run Quake.js! The PI can’t even start that, because it’s so demanding. It is one of the largest and most resource hungry asm.js projects out there – but the ODroid XU4 did a fantastic job.
Now it’s not a silky smooth experience; I would guess something along the lines of 17-20 fps. But you know what? That’s pretty good for a $45 board.
I have owned far worse x86 PCs in my day.
The Tinkerboard
Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that something had been done by Asus to rectify the situation though, because Asus really should have done a better job before releasing it. It has now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have waited with the release. I was not alone in butchering the whole board; it has been a source of frustration for those that bought it. 75€ is not much, but no-one likes to throw money out the window like that.
Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order, I have a feeling the Jessie image will be even faster. Ubuntu has always been a high-quality distribution, but it’s also one of the most demanding; one might even say it’s become bloated. But it does deliver a near Microsoft Windows like experience, which has served the Linux community well.
But the Tinkerboard really delivers! (click here for the video) Asus have cleaned up their act and implemented the drivers properly, and you can feel that the moment the desktop comes into view. With the PI you are always fighting lagging performance: when you start a program the whole system freezes for a while, when you quit a program the system freezes – hell, when you move the mouse around the system bloody freezes! Well, that is not the case with the Tinkerboard, that’s for sure. The Tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC, to be honest.
- Skid-Row: 10-15 frames per second
- Quake: Full screen 32bit graphics, runs like hell
- Plex: Plays back fullscreen HD, just awesome!
All I can say is this: if you are going to do any bit of embedded coding, regardless of whether you are using Smart Mobile Studio or some other devkit — this is the board to get (!)
As already mentioned, it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, and you can explore software far more complex than both the PI and ODroid combined. With the Tinkerboard you can finally deliver a state of the art product built with off-the-shelf web components. It’s in a league of its own.
The ‘tinker’ rocks at last
When I first bought the Tinker I felt cheated. It was so frustrating, because the specs were so good and the terrible performance just came down to sloppy work and Asus releasing it prematurely for cash (let’s face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.
There is still a lot that can be done to make the Tinkerboard run even faster. Like I mentioned, Ubuntu is not the racecar of distributions out there. Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompiled every piece with maximum LLVM optimization and ran as few services as possible — the Tinkerboard would positively fly!
So do I recommend it? I am thrilled to say that yes, I can finally recommend the Tinkerboard! It is by far the coolest board in my collection now. In fact it’s so good that I’m donating one to my daughter. She is presently using an iMac, which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and Youtube, buy a nice touch-screen display and wall mount it in her room.
Well done Asus!
LDef and bytecodes
LDef, short for Language Definition format, is a standard I have been formulating for a couple of years. I have taken my experience with writing various compilers and parsers, and also my experience of writing RTLs, and combined it all into a standard.
LDef is a way for anyone to create their own programming language. Just like popular libraries and packages deal with the low-level stuff – like Gr32, which is an excellent graphics library — LDef deals with the hard stuff and leaves you with the pleasant job of defining what the language should look like.
The idea is to make a language construction kit, if you like, where the underlying engine is flexible enough to express the languages we know and love today – and also powerful enough to express new ideas. For example: let’s say you want to create an awesome new game system (just as an example; it applies to any system that can be automated). You have the means and skill to create the actual engine – but how are you going to market it? You will be up against monoliths like Unity and simple “click and play” engines like ClickTeam Fusion, Game Maker and the like.
Well, the only way to make good games is hard work. There are no two ways about it. You can fake your way only so far – so at the end of the day you want to give your users something solid.
In our example of publishing a game-engine, I think that you would stand a much better chance of attracting users if you hooked that engine up to a language. A language that is easy to use, easy to learn and with commands that are both specific and non-specific to your engine.
There are some flavours of Basic that have produced knock-out games for decades, like BlitzBasic. That language alone has produced hundreds of titles for PC, Xbox and even Nintendo. So it’s insanely fast and not a pushover.
And here is the cool part about LDEF: it makes it easy for you to design your own languages. You can use one of the pre-defined languages, like object pascal or visual basic, if that is what you like – but ultimately the fun begins when you start to experiment with new ideas and language features. And it’s fun when you get to that point, because all the nitty gritty is handled. You get to focus on the superficial stuff like syntax and high level functions. So you can shave off quite a bit of development time and make coding fun again!
The paradox of faster bytecodes
Bytecodes used to be too slow for anything substantial. On 16-bit machines bytecodes were used in maybe one language (that I know of), and that was the ‘E’ compiler. The E language was maybe 30 years ahead of its time and is probably the only language I can think of that fits cloud programming like hand in glove. But it was also an excellent system automation language (scripting) and really turned some heads back in the late 80s and early 90s. REXX was only recently added to OS X, some 28 years after the Amiga line of computers introduced it to the general public.

Bytecode dump of a program compiled with the node.js version of the compiler
In modern times bytecodes have resurfaced through Java and the .NET framework, which for some reason caused a stir in the whole development community. I honestly never bought into the hype, but I am old enough to remember the whole story – so I’m probably not the Microsoft demographic anyways. Java hyped their virtual machine opcodes to the point of exhaustion. People really are stupid. Man, did they do a number on CEOs and heads of R&D around the world.
Anyways, the end of the story was that Intel and AMD went with it and did some optimizations that could help bytecodes run faster. The stack handling was optimized with Java in mind, because let’s face it – it is the proverbial assault on the hardware. And the cache was expanded on command from the emper.. eh, Microsoft. Also (if I remember correctly) the “jump to pointer” and various branch instructions were made to execute faster. I remember reading about this in Dr. Dobb’s Journal and Microsoft Developer Magazine; granted it was a few years ago. What was interesting is the symbiotic relationship that exists between Intel and Microsoft; I really didn’t know just how closely knit these guys were.
Either way, bytecodes in 2017 are capable of a lot more than they ever were on 16-bit and early 32-bit systems. A CPU like the Intel i5 or i7 will chew through bytecodes like a warm knife through butter. It all depends on how you orchestrate the opcodes and how much work you delegate to the various instructions.
Modeled instructions
Bytecodes are cool, but they have to be modeled right or it’s all going to end up as a bloated, slow and limited system. You don’t want to be too low-level, otherwise what is the point of bytecodes? Bytecodes should be a part of a bigger picture, one that could some day be modeled using FPGAs for instance.
The LDef format is very flexible. Each instruction is ultimately a single 32-bit longword (4 bytes) where each byte holds key information about the command, what data follows it and how that data should be read.
The byte organization is:
- 0 – Actual opcode
- 1 – Instruction layout
Depending on the instruction layout, the next two bytes can hold different values. The instruction layout is a simple value that defines how the data for the instruction is passed.
- Constant to register
- Variable to register
- Register to register
- Register to variable
- Register to stack
- Stack to register
- Variable to variable
- Constant to variable
- Stack to variable
- Program counter (PC) to register
- Register to Program counter
- ED (exception data) to register
- Register to exception-data
As you can probably work out from the information here, this list hints at some architectural features. Variables are first class citizens in LDef; they are allocated, managed and released using instructions. Constants can either be handled automatically and referenced by id (a resource chunk is linked to the class binary) or be compiled “in place” directly into the assembly as part of the instruction. For example:
load R[0], "this is a test"
This line of code takes the constant “this is a test” and moves it into register #0. You can choose to have the text-data stored as a proper resource which is appended to the compiled bytecode (all classes and modules have a resource chunk), or just compile it “as is” and have the data read directly. The first option is faster and something you can adjust with compiler optimization options. The second option is easier to work with when you debug, since you can see the data directly as part of the debug memory dump.
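To make the encoding tangible, here is a minimal object pascal sketch of how such a 32-bit instruction could be packed and unpacked. The field names are mine, for illustration only, and not part of the official standard:

type
  TLDefInstruction = record
    Opcode: byte;  // byte 0: the actual opcode
    Layout: byte;  // byte 1: how the operand data should be read
    Data0:  byte;  // bytes 2-3: meaning depends on the layout,
    Data1:  byte;  // e.g. source and target register numbers
  end;

function EncodeInstruction(const Instr: TLDefInstruction): longword;
begin
  Result := longword(Instr.Opcode)
         or (longword(Instr.Layout) shl 8)
         or (longword(Instr.Data0) shl 16)
         or (longword(Instr.Data1) shl 24);
end;

function DecodeInstruction(Value: longword): TLDefInstruction;
begin
  Result.Opcode := byte(Value and $FF);
  Result.Layout := byte((Value shr 8) and $FF);
  Result.Data0  := byte((Value shr 16) and $FF);
  Result.Data1  := byte((Value shr 24) and $FF);
end;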
And last but not least there are registers – 32 of them (so the low-level coders out there should have few limitations with regards to register mapping). All operations (like divide, multiply etc.) operate on registers only. So to multiply two variables, they first have to be moved into registers and the multiplication executed there – then you can move the result back to a variable afterwards.

LDef assembly code. Simple but extremely effective
The reason registers are used in my runtime system is that you will not be able to model an FPGA with high-level concepts like “variables”, should someone ever try to implement this as hardware. Registers, however, are very easy to model, and they are how actual processors work: you move things from memory into a CPU register, perform an action, and then move the result back into memory.
This is where Java made a terrible mistake. They move all data onto the stack and then call the operation. This simplifies execution of instructions, since there are never any registers to keep track of, but it just murders stack-space and renders Java useless on mobile devices. The reason Google threw out classical Java (i.e. Java as bytecodes) is due to this fact (and more). After the first Android devices came out, they quickly switched to a native compiler – because Java was too slow, too power-hungry and required too much memory (especially stack space) to function properly. Battery life was close to useless, and the only way to save Java was to go native. Which is laughable, because the entire point of Java was mobility, “compile once, run everywhere” — yeah well, that didn’t turn out too well, did it 😀
Dot net improved on this by adding a “load resource” type instruction, where each method loads in its constant data by number – and they are loaded into pre-defined slots (the variables you have used, naturally). Then you can execute operations in typical “A + B to C” style (actually all of that is omitted, since the compiler already knows A, B and C). This is much more stack friendly and places the performance penalty on the common language runtime (CLR).
Sadly Microsoft’s platform, like everything Microsoft does, requires a pretty large infrastructure. It’s not simple, elegant and fast – it’s monolithic, massive and resource hungry. You don’t see .net being the first thing ported to a new platform; you typically see GCC followed by Freepascal.
LDef takes the bytecode architecture one step further. On assembly level you reference data using identifiers, just like .net, and each instruction is naturally executed by the runtime-engine – but data handling is kept within the virtual realm. You are expected to use the registers as temporary holding slots for your information, and no operations are ever done directly on a variable.
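As a toy illustration of why this is cheap to execute, here is what the inner dispatch of such a runtime could look like, reusing the TLDefInstruction record from the sketch above. The opcode value is made up for the example:

const
  op_mul = $02; // made-up opcode value, purely for illustration

var
  Registers: array[0..31] of longword; // LDef defines 32 registers

procedure Execute(const Instr: TLDefInstruction);
begin
  case Instr.Opcode of
    op_mul:
      // Arithmetic is register-to-register only; variables are moved
      // into registers first, and the result moved back out afterwards
      Registers[Instr.Data0] :=
        Registers[Instr.Data0] * Registers[Instr.Data1];
  end;
end;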
The benefit of this is:
- Better payload balancing
- Easier to JIT since the architecture is closer to real assembly
- Retains important aspects of how real hardware works (with FPGA in mind)
So there are good reasons for the standard, all of them very good.
C like intermediate language
With the assembler so clearly defined, you would expect assembly to be the way you work. In essence that is what you do; but since OOP is built into the system and there are structures you are expected to populate — structures that would be tedious to fill in using raw, unbridled assembler — I have opted for a C++ inspired intermediate language.

The LDEF assembler kitchen sink
You would half expect me to implement pascal, but truth be told pascal parsing is more complex than C parsing, and C allows you to recycle parsers more easily – so dealing with sub-structures and nested regions means less maintenance and is easier to write code for.
So there is no specific reason why I picked C++ as an intermediate language. I would prefer pascal, but I also think it would cause a lot of confusion, since object pascal will be the prime citizen of LDef languages. My other language, N++, also used curly brackets, so I’m honestly not strict about what syntax people prefer.
Intermediate language features supported are:
- Class declarations
- Struct declarations
- Parameter to register mapping
- Before method code (enter)
- After method code (leave)
- Alloc section for class fields
- Alloc section for method variables
The before and after code for methods is very handy. It allows you to define code that should execute before and after the actual procedure body. On a higher level, when designing a new language, this is where you would implement custom allocation, parameter testing and so on.
So calling this method:
function testcode() {
  enter {
    writeln("this is called before the method entry");
  }
  leave {
    writeln("this is called after the method exits");
  }
  writeln("this is the method body");
}
Results in the following output:
this is called before the method entry
this is the method body
this is called after the method exits
When you start designing your own language, you will quickly find uses for these hooks; everything from parameter testing to custom allocation can be slotted in here.
Truly portable
Now, I have no aspirations of going into competition with either Oracle, Microsoft or anyone in between. Like most geeks I do things I find interesting, and enjoy working within a field of computing that is stimulating and personally rewarding.
Programming languages is an area where things haven’t really changed that much since the golden 80s. Sure, we have gotten a ton of fancy new software, and the way people use languages has changed – but at the end of the day the languages we use haven’t really changed that much.
JavaScript is probably the only language that came out of the blue and took the world by storm, but that is due to the central role the browser holds for the internet. I sincerely doubt JavaScript would even have made a dent in the market otherwise.
LDef is the type of toolkit that can change all this. It’s not just another language, and it’s not just another bytecode engine. A lot of thought has gone into its architecture, not just notions of “how can we do this or that”, but big ideas about the future of computing and how IOT will sculpt the market within 5-8 years. And the changes will be permanent and irrevocable.
Being able to define new languages will be utmost important in the decade ahead. We don’t even know the landscape yet, but we can extrapolate some ideas based on where technology is going. All of it in broad strokes of course, but still – there are some fundamental facts about computers that are timeless and haven’t aged a day. It’s like mathematics: the Pythagorean theorem may be 2500 years old, but it’s just as valid today as it was back then. Principles never die.
I took the example of a game engine at the start of this article. That might have been a poor choice for some, but hopefully the general reader got the message: the nature of control requires articulation. Regardless if you are coding an invoice system or a game engine, factors like time, portability and ease of use will be just as valid.
There is also automation to keep your eye on. While most of it is just media hype at this point, there will be some form of AI automation. The media always exaggerates things, so I think we can safely disregard a walking, self-aware Terminator type robot replacing you at work. In my view you can disregard as much as 80% of what the media talks about (regardless of topic). But some industries will see vast improvement from automation. The oil and gas sector is the most obvious. At the moment security is as good as humans can make it – which means it is flawed, and something goes wrong every day around the globe. But smart pumping stations and clever pressure measurement and handling will make a huge difference for the people who work with oil. And safer oil pipelines means lives saved and better environmental control.
The question is: how do we describe programs 20 years from now? Are our current tools up to the reality of IOT and billions of connected devices? Do we even have a language that runs equally well as a 1000-instance server-cluster as it would as a stand-alone program on your desktop? When you start to look into parallel computing and multi-cluster data processing farms – languages like C# and C++ make little sense. Node.js is close, very close, but dealing with all the callbacks and odd limitations of JavaScript is tedious (which is why we created Smart Pascal to begin with).
The future needs new things. And for that to happen we first need tools to create them. Which is where my passion is.
Node, native and beyond
When people create compilers and programming languages they often do so for a reason. It could be that their own tools are lacking (which was my initial motivation), or that they have thought of a better way to achieve something; the reasons can be many. In Microsoft’s case it was revenge and spite, since they were unsuccessful in stealing Java away from Sun Microsystems (Oracle now owns Java).

LDef binaries are fairly straight forward. The less fluff the better
Point is, you implement your idea using the language you know – on the platform you normally use. So for me that is object pascal on Windows. I’m writing it in object pascal because, while the native compiler and runtime are written in Delphi, they are made to compile under Freepascal for Linux and OS X.
But the primary work is done in Smart Pascal and compiled to JavaScript for node.js. So the native part is actually a back-port from Smart. And there is a good reason I’m doing it this way.
First of all, I wanted a runtime and compiler system that would require very little to run. Node.js has grown fat with features over the past couple of years – but out of the box node.js is fast, portable and available almost anywhere these days. You can write some damn fast and scalable cloud servers with node (and with fast I mean FAST, as in handling thousands of online gamers all playing complex first person worlds), and you can also write some stable and rock solid system services.
Node is turning into a jack of all trades, capable of scaling and clustering way beyond what native software can do. Netflix actually re-wrote their entire service stack using node back in 2014. The old C++ and ASP approach was not able to handle the payload, and every time they did a small change it took 45 minutes to compile and get a binary to test. So yeah, node.js makes so much more sense when you start looking at big data!
So I wanted to write LDef in a way that made it portable and easy to implement, regardless of platform, language and features. Out of the box, JavaScript is pretty naked stuff, and the most advanced high-level feature LDef uses is buffers, to deal with memory. Everything else is forced to be simple and straightforward. No huge architecture or global system services; just a small, fast runtime and your binaries. And that’s all you need to run your compiled applications.
Ultimately, LDef will be written in LDef itself and compile itself, needing only a small executable stub to be ported to a new platform. Most of mono C# for Linux is written in C# itself – again making it super easy to move mono between distros and operating systems. You can’t do that with Visual Studio, at least not until Microsoft wants you to. Neither would you expect that from Apple XCode. Just saying.
The only way to achieve the same portability that mono, freepascal and C/C++ have to offer, is naturally to design the system as such from the beginning. Keep it simple, avoid (operating system) globalization at all cost, and never-ever use platform bound APIs except in the runtime. Be POSIX, but for everything!
Current state of standard and licensing
The standard is currently being documented, and a lot of work has been done in this department already. But it’s a huge project to document, since it covers not only LDEF as a high-level toolkit, but stretches from the compiler, to the source-code it is designed to compile, to the very binary output. The standard documentation is close to a book at this stage, but that’s the way it has to be to ensure every part is understood correctly.
But the question most people have is often “how are you licensing this?”.
Well, I really want LDEF to be a free standard. However, to protect it against hijacking and abuse – a license must be obtained for financial entities (as in companies) using the LDEF toolkit and standard in commercial products.
I think the way Unreal software handles their open-source business is a great example of how things should be done. They never charge the little guy or the Indie developer – until they are successful enough to afford it. So once sales hits a defined sum, you are expected to pay a small percentage in royalties. Which is only fair since Unreal engine is central to the software to begin with.
So LDef is open source, free to use for all types of projects (with an obligation to pay a 3% royalty for commercial products that exceed $4999 in revenue). Emphasis is on open source development. As long as the financial obligations of companies and developers using LDEF to create successful products are respected, only creativity sets the limit.
If you use LDEF to create a successful product where you make 50.000 NKR (roughly USD 5000), you are legally bound to pay 3% of your product revenue monthly for the duration of the product. Which is extremely little: 3% of $5000 is $150, a lot less than you would pay for a Delphi license (the latter costing upwards of USD 3000).
Freepascal online, awesome work!
Freepascal is, with the exception of GCC – the GNU C/C++ compiler toolchain – probably the most versatile and adaptable compiler in the world. Unlike most compilers outside the C camp, Freepascal is a complete toolchain, and it has more than a few similarities to C/C++ in that it’s largely self-reliant.

Freepascal scales from the old mc68000 family to the latest Intel i7, impressive!
FPC is easy to port over to new (or old) platforms, and as far as languages go – object pascal has a level of productivity that I have yet to find in other languages.
Now I have no interest in language wars or fanboy stereotyping, all languages have aspects that are unique and beneficial. So this is not an attempt to promote A over B.
Targeting both old and new from a single codebase
While mainstream and work related topics never brush into this – there is actually a healthy group of people still using older platforms. Naturally this is well within the “hobby” range – where passion and curiosity are the prime motivators rather than cash. But still, it is interesting to see how little of modern software is capable of scaling back to processors 10, 15 or 20 years out of date.
Alb42 is the nickname of a very industrious individual who has taken it on himself to port freepascal back to his roots. Which just happens to be the Amiga line of computers.

Some of the best arcade games ever made are “vintage” by nature
But he has also done a lot of work for modern systems. A new Amiga was actually released this year (although most people haven’t noticed it) – and finding development software for a brand new platform is not easy. You have a variant of C/C++ naturally – but the honest truth is that C/C++ is not the language most people use to create desktop productivity software in. That is left to object pascal, C# and variations of Java. Delphi alone is responsible for a huge portfolio of popular software on Windows for instance, only surpassed recently by monoliths like C#.
So getting freepascal running on Amiga OS 4 (which is the latest and modern operating system shipping with the new Amiga x5000 model) is very important to the market share of the platform.
Do it online!
Alb42 has set up an online compiler testbed where you can type in your freepascal snippets and compile to executable code on the spot. You can choose between MorphOS, Classic 68k, Aros (x86) and Amiga OS 4 (ppc). It’s a very simple setup, yet it’s possible to do ad-hoc compilation and play around with the system 🙂

While humble, Alb42 has set up an interesting online kitchen sink that demonstrates how flexible and powerful object pascal really is
To give it a go, head over to: https://home.alb42.de/fpamiga/
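If you want something to paste in just to see it work, this classic starter compiles unchanged for all four targets:

program HelloAmiga;
begin
  // Same source, four different Amiga flavours
  WriteLn('Hello from Freepascal!');
end.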
And as always, visit his blog for both VMware images with cross compilation and binaries for your favorite retro system.
But perhaps even more important, your new x5000 (!)
Don’t forget to join us at Amiga Dispatch on Facebook.
Cheers!
A day with the Tinkerboard, part 2
Please read the first article here before you continue.
Having gone through a wealth of alternative boards – alternative to the Raspberry PI 3b, that is – I have to admit that I am really, really disappointed by the results. Not just with the Tinkerboard, but also with boards like the ODroid XU4 and the whole fruit-basket of banana, orange and pineapple PI clones.

What a waste
And please keep in mind that I have not been out to discredit anyone. I have been genuinely interested in these alternatives for many reasons, not just emulation or multimedia. I have no problem paying that extra 15£ for a system that delivers twice the power of the PI; in fact, I even paid close to 150£ for the UP board. Which is also the only board that so far has delivered on its promise, Microsoft Windows and the horrors of eMMC storage considered.
Part of what I work with these days include node.js and full screen browser views, often touch enabled for POS (point of sale) type machines. This is an exciting field because for the first time in history we can now create rich, responsive and rock solid solutions using off the shelves web technology.
So while emulation of older systems is interesting for hobby purposes, the performance of vital technologies like the browser – and consequently the performance of JavaScript and the underlying JIT compilation, is naturally paramount for what I do.
So I was really looking forward to the extra power these boards advertise: more CPU power, more cores, extra memory and, last but not least, the benefits of a fast GPU. The latter directly impacts the effects we can use in our user interface. And while effects may seem superficial, they are important for touch feedback and the customer's experience.
We have all come across touch devices that give little or no visual feedback. We all know what it's like when you type something and the UI just hangs while talking with a server. This is why web tech is so important today. In many ways HTML5 and WebSocket are redefining what we think of as software.
So while I bring few pie-charts and numbers to the table, I do bring honesty and perspective. It's not all fun and games.
Was it really fair?
A couple of people have voiced that perhaps my way of testing things is unfair. Like I mentioned in my previous article, the PI has a healthy head-start and a rich community that not only consumes and demands, but also contributes substantially to the value of the product. A well-established community around a product is worth its weight in gold. The alternative boards have yet to establish that, and when it comes to the Tinkerboard, it's pretty much just getting started.
Secondly, as far as emulation goes, we have to remember that those who make these products rarely have emulation in mind (and probably not the Amiga, which by any standard is esoteric compared to Nintendo and Sega consoles). If we look at what people use gadgets like the Raspberry PI for, I think products like Kodi, Apache and various DIY programming experiments out-rank all the retro systems combined. And it's clear that the people behind the Tinkerboard are more media oriented than anything else; it ships with hardware support for 4K video playback, after all.
Having said that, I do feel I gave both the ODroid and the Tinkerboard a fair trial. I did not approach these boards as a developer, but rather as a consumer. I have focused on the experience of familiar things – like using a browser, how well JavaScript runs, and whether the UI lags under stress; things that any customer will face after purchase.
UAE4Arm vs FS-UAE
As far as UAE (the Unix Amiga Emulator) goes, there is one thing that might be unfair. Amibian is based around something called UAE4Arm, which is a highly optimized version of the Amiga emulator. I would even go so far as to say it's THE most optimized UAE version on any platform – Windows and OS X included.

The Amiga was a lot more than just a games machine, it was way ahead of both Windows and OS X for a decade
Comparing that with FS-UAE is perhaps unfair. FS-UAE is based on the older, non-optimized codebase. It's known for stable emulation and good JIT performance; sadly, the JIT engine only works on x86 and is thus disabled on ARM systems. So, to be honest, it can't hold a candle to UAE4Arm. The latter uses lookup tables for everything, fast switch cases instead of long sequences of “if then else” code (easier for the compiler to optimize), and it benefits from hardcore LLVM level-3 optimization. So I agree, comparing FS-UAE to UAE4Arm is like comparing a tiger with a kitten. It's not even in the same ballpark.
Optimization on this level is almost a lost art these days. People are used to languages like C# and Java – which are hopelessly bloated (for those that have seen what raw assembler and hand optimized pascal can deliver). On the Amiga we learned how to shave off a clock-cycle here and there – and the end result really made a huge difference.
The same type of low-level, every-cycle-counts optimization is used throughout UAE4Arm. FS-UAE is safe, slow and can hardly deliver A500 speed on a Raspberry PI. Yet UAE4Arm spits out 3.2 times the power of an Amiga 4000 with a 68040 CPU. That was the flagship power machine back in the day, and this little $40 card delivers 3 times that power under emulation.
It's like comparing a muffled 1978 Lada with a pissed-off 2017 Lamborghini.
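To make the difference concrete, here is a toy FreePascal sketch of lookup-table dispatch (my own illustration, not UAE4Arm source): the emulator's hot loop becomes a single indexed call per instruction instead of a long run of compare-and-branch tests.

program DispatchSketch;
{$mode objfpc}

type
  TOpcodeHandler = procedure;

procedure OpNop;
begin
  // this opcode does nothing
end;

procedure OpIllegal;
begin
  WriteLn('illegal opcode');
end;

var
  Handlers: array[Byte] of TOpcodeHandler;
  i: Integer;
  Opcode: Byte;
begin
  // one-time table setup: every possible opcode byte gets a handler,
  // with unknown opcodes mapped to a safe default
  for i := Low(Byte) to High(Byte) do
    Handlers[i] := @OpIllegal;
  Handlers[$00] := @OpNop;

  // the inner loop: fetch, index, call - no if/then/else chain at all
  Opcode := $00;
  Handlers[Opcode]();
end.

The table lookup replaces dozens of comparisons with a single array index, which is a big part of why this style of emulator leaves the generic codebase in the dust.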
Fair is fair
Having tested just about every distro I could find for the Tinker, even back-tracking to older releases in the hope that the drivers there would be better suited for the board (perhaps the new drivers were released in a hurry, or a mistake was made), the last straw really was Android. My hope was that Android would have better drivers, since it sees a lot of development and has giants like Google and Samsung behind it – so the last glimmer of light was that Asus had focused on Android rather than Linux.

Will Android save the day and give the Tinker its due credit?
I downloaded the latest image, burnt it and booted – thinking that this should finally put things right. Well it didn’t. Quite the opposite.
The first boot took almost an hour. Yes, an hour. I suspect this is because it did a network restore – downloading packages and requirements from the Internet. But that is still no excuse. I have tried Android on the PI, and it was up and running in less than 10 minutes; and that includes having to set some initial user information.
When the Android desktop finally came into view, it was stripped down, with no repository package manager. In other words, you would have to install packages manually. But that would have been OK had it not been for the utterly sluggish performance.
Android on the PI is no race-car, but at least it doesn't freeze and lock up every five seconds. Clicking on a link in the browser froze the whole system, and you were asked if you wanted to terminate the process again and again while Android figured out the meaning of life or whatever. Every single time.
I'm sorry, but this is the worst experience I have ever had with a board. The ODroid XU4 at least tries to deliver a good experience. I had no problems finding Linux editions that ran well (especially gaming systems, Kodi and some homebrew distros) – and the Chrome build for the ODroid gave far better results than both the PI and the Tinkerboard. It was not the power-house it had been hyped up to be, but at least it delivered a normal desktop experience. Even the PI locks up when facing demanding tasks, but with a faster quad-core CPU, twice the RAM and a supposedly superior GPU, the Tinkerboard should remain both stable and interactive under pressure.
I am sorry, but I cannot recommend the Tinkerboard at all. The ODroid gives you a slight edge over the PI, and if someone baked a more optimized Linux distro for it (it ships with a variant of Ubuntu) then the ODroid XU4 would kick serious ass. But the Tinkerboard is a complete disaster.
Do not waste your money on the Tinkerboard. It may have the hardware, but the software to utilize and express that power is simply not there! It has been the most pointless purchase I have made this year.
A common waste
The common denominator between these alternatives is the notion that more power equals more stuff, which ultimately results in biting off more than they can chew. A distro like Pixel is lightweight and optimized, and the PI foundation has spent a considerable amount of time making sure the drivers run as fast as possible.

Saying it doesn't make it so, except in Weird Science
What these alternative hardware suppliers don't seem to get is this: what is the point of having a CPU that is twice as fast as the PI's, when you waste it on a more demanding version of Linux? Both the ODroid and the Tinkerboard ship with Ubuntu clones. And while Ubuntu is wonderful to work with on a high-end x86 machine, with 32 gigabytes of RAM and an i7 CPU growling away under your desk, it is without a doubt the worst possible OS you could pick for a tiny ARM board.
To put things into perspective: the Tinkerboard has the “theoretical” CPU power of a Samsung Galaxy S4 mobile phone. Yet in their infinite wisdom they decided Ubuntu is what is really going to show off this fart of a device.
It makes absolutely no sense from a consumer’s perspective. As a user you want that extra power for running programs. You want to use the extra power on your stuff. Yet these guys seem to think that wasting all the extra power on infrastructure is going to score points.
So if anyone from the Tinkerboard or ODroid camp reads this, please remember the old saying “If you cannot innovate, emulate”. Make a fast, stable and small Jessie-based distro like Pixel. Focus on optimization. Compile whatever you can with maximum optimization. Shave off every clock cycle you can. Hand-carve the drivers in assembler if that's what it takes.
Because the sad result is, that right now the Raspberry PI is outperforming you with lesser hardware. There is no point in buying a Tinkerboard because it actually delivers a worse experience. It has bitten off more than it can chew and it’s trying to be something it’s not.
A day with the Tinkerboard, part 1
Anyone into IOT mini computers, be it as a hobby or professionally, has no doubt heard about the Tinkerboard. The Tinkerboard is an alternative to the Raspberry PI (aren't they all) produced by hardware giant Asus. Everything from its form-factor (size) to its color-coordinated GPIO pins is more or less the same as the PI. What separates the two is that the Tinker gives you double the CPU power, double the RAM and pretty much double of everything. If we compare it with the Raspberry PI 3b, that is.
But you don’t get double for free. The Tinkerboard retails on Amazon for USD 60 which means something in the ballpark of 55£ for those that live in the UK. For the rest of us that live in far-away places like Norway, prices and taxes may vary.
Before I continue I want to write a few words about pricing. The word “expensive” is one I keep hearing when it comes to the Tinkerboard, but that is extremely unfair considering what you get for your money.
The Raspberry PI 3b is an awesome product and packs a lot of goodies into a credit-card sized computer. The same is true for the Tinkerboard, but they have packed even more stuff into the same space! The difference in price is just 15£ – but those 15£ give you the power to emulate demanding platforms like the Nintendo Wii, Sega Dreamcast and Playstation (if you are into retro gaming). Platforms the Raspberry PI 3b doesn't stand a chance of emulating (at least not smoothly).
For serious applications all that extra CPU power can be the difference between success and failure. In many ways the Tinker feels a bit like working on a low-end laptop. If you are building a movie server for your living-room, the Tinkerboard will be able to convert and stream movies much more smoothly than the PI. And if you are looking for a device to power your next kiosk system or POS device, again the extra memory and CPU power are going to make all the difference. And with just 15£ between the two models, I cannot imagine why anyone would pick the PI if gaming or movie streaming is on the menu.
Oh and it has built-in wi-fi and bluetooth support! Here are the specs as listed on Amazon:
- High-performance quad-core ARM SoC at 1.8GHz with 2GB of RAM – the Tinkerboard features the Rockchip RK3288 SoC with Mali-T764 GPU and 2GB of dual-channel DDR3 memory
- Non-shared GBit LAN, shielded Wi-Fi with upgradable antenna support
- Highly compatible PCB & topology – the Tinkerboard offers extensive compatibility with SBC accessories & chassis
- HD audio & HD/UHD video support – the Tinkerboard supports 192kHz/24-bit HD audio along with accelerated HD & UHD (4K) video playback (*requires use of the Rockchip video player in TinkerOS)
- DIY-friendly design – the Tinkerboard features multiple DIY-friendly features, including a color-coded GPIO header, silkscreen PCB and color-coded pull tabs
That sounds pretty nice, doesn't it 🙂
One does not simply take on the mighty PI
Being an alternative to the Raspberry PI is not as easy as it sounds. You would imagine that it all boils down to the specs right? So if you beef up the cpu, ram and gpu everyone will throw away their PI’s and buy your stuff? If only that was true. In which case the Tinkerboard would be awesome since it outperforms the PI on every level.
But it’s not that simple. There is more to a product than just raw power.
One of the cool things about the Raspberry PI is the community. The knowledge base of the product if you like. In order to build a community and be a success, everything has to click. I mean just look at the third-party eco-system that exists around the PI; all those companies selling add-ons and widgets, sensors and remote controls. The PI itself is quite dull. It’s all that extra stuff that makes it so fun to play with. And then you have things like excellent customer service, a forum where people help each other out, well written documentation — the value of the product suddenly exceeds whatever you paid for it to begin with.
And the Tinkerboard is not alone in trying to get a piece of the action. It competes with boards like the ODroid XU4, the x86 based UP boards, Latte Panda, The Pine boards and a whole fruit-basket of banana, orange and whatever PI clones. All of them trying to get a bite of the phenomenon that is the Raspberry PI.
So Asus has their work cut out for them.
First impressions
Having unpacked the board and plugged in all the cables (which is exactly as dull as it sounds), the first thing you do is look for software – which naturally should be a one-click operation on the manufacturer's website.
Well, the first thing that annoyed me was exactly that: the Asus website. The Tinkerboard is not something that belongs on a sterile, streamlined corporate site. It belongs on a website that is easy to navigate, with plenty of the information hobbyists and “tinkerers” need to get started. The Asus website simply lacks the warmth and friendliness of the PI foundation's website. I even had to google around a couple of times just to find the disk images. These should be readily available on the site – on the first page even – to make sure customers can get up and running quickly.

It’s a little bit clangy and a little bit jammy
The presentation of the board was OK, but you really shouldn't have to use an external search engine to locate essential data for something you just bought. Unlike the PI foundation, Asus seems to have put little effort into what customers get once they have bought the board. As for a community, I don't even know if there is one.
When you finally manage to find the bloody download page, these are your options (there are 17 downloads in total, counting older OS images, config files and schematics):
- TinkerOS 1.8
- TinkerOS 1.6 Beta (my initial download)
- Android, also beta
And don’t expect a polished Linux distro like Pixel either. You get a clean distro with some typical stuff pre-loaded (python, perl for some reason, and various tidbits) but nothing close to the quality of Pixel.
Note: why they would list older beta releases in their downloads is beyond me. It only adds to the confusion of an already sterile list of items. Customers who just got their boards will be eager to try them out and hence make the mistake I did. Beta and unstable releases should be in a separate category, with the stable and up-to-date images listed first.
The second thing that really irritated me was something as simple as adjusting the keyboard layout and setting the locale. These are simply not present in the preferences at all. Once again you have to google around until you find the terminal commands to set these things manually, which utterly defeats the point of booting into a desktop environment to begin with. This might seem trivial for someone who knows Linux well, but to a novice it can take hours. Things this fundamental should be available in the preferences, or at the very least as a script – like raspi-config on the PI.
But OK. If we look away from these superficial things and focus on the actual board, things are not half bad.
It's the insides that count
The first thing you are going to notice is how much faster the system is compared to the PI. When you start a demanding application on the PI, like LibreOffice or Lazarus, the whole system can freeze up for a few seconds while the CPU goes through the roof. I was pleased to find that this is not the case here (at least not as disruptive). You can start a program and the system remains responsive while things load.
What made me furious though was the quality of the graphics drivers. Performance-wise, the Tinkerboard is twice (actually a bit more) as fast as the PI on all fronts – and the GPU is supposed to deliver a vastly superior graphics experience. Yet when I ran Amibian.js (the Smart Pascal JavaScript desktop system) in Chrome, the CSS hardware effects were painfully slow. I even questioned if the GPU was used at all (which I later confirmed was the case – so much for downloading a beta image by mistake). I was amazed at the CPU power of the Tinkerboard, because despite having to do everything purely with the CPU (moving pixels in memory, allocating device-independent bitmaps, rasterizing layers and complex compositions in X), it was more than usable.
But it sure as hell was not the superior graphics experience I was hoping for.
I also noticed that when moving a normal desktop window (X window that is) rapidly around the screen, it would lag behind and re-trace your steps until it caught up with the mouse pointer. This was very odd since most composition engines would have eliminated these movements as much as possible – and the GPU would do all the work. Again, I blame the drivers and pondered what could possibly make a GPU perform so badly.

I'm surprised it booted at all; nothing is working
So while the system is notably faster than the PI, I get the sense that the drivers and tooling are not really polished. The hardware no doubt has juice, but without drivers carefully written for the board, most of the extra juice you pay for is wasted.
In the end I opened up Chrome and typed “chrome://gpu” to see what was going on. And not surprisingly, I was correct: it wasn't using the GPU at all, and I had wasted hours on something that should have been clearly marked “Lacks hardware acceleration” in bold, italic, underlined, big red letters. Like the PI foundation did when they shipped the Raspberry PI 2 without GPU drivers. But they promptly delivered, and it was OK because you were informed up front.
Back to scratch
After a trip back to google I downloaded the right (or at least working) image, namely the TinkerOS 1.8 stable. It booted up, resized the partition to fill the whole SD card automatically (and other boring bits).
The first thing I did was start Chrome and check “chrome://gpu” for statistics. And thankfully, most of the items were now working and active.

That's better, although I wonder why GPU DIBs are not active
Emulation, the name of the game
While I have serious tests to perform, there is no doubt that what I was most eager to play with was UAE (the Unix Amiga Emulator). Like most retro-heads I have a ton of Raspberry PIs and ODroids around the house, with the sole purpose of emulating older game and computer systems. And I think everyone knows that my favorite computer in the whole wide world is the Commodore Amiga.
On the PI, Gunnar's native Amibian distro gives you awesome performance. If you overclock the RAM, GPU and CPU using safe numbers, the PI 3b spits out roughly 3.2 times the power of an MC68040-based Amiga 4000. In other words, 3 times the power of a high-end Amiga from the early 90s. That is quite an achievement for a $40 board. You can only imagine my expectations for a board running more than twice as fast as the PI does (and costing 15£ more).

Boots fine but you can forget about using it. The PI is 100 times faster than this
One snag though was that UAE4Arm, the highly optimized version of the Amiga emulator, doesn't exist for the Tinkerboard. In theory it should compile without problems, since the latest version uses SDL exclusively – there is no odd, deprecated UI toolkit like the old UAE4Arm used.
The only thing available was FS-UAE. And while that is a very popular emulator on Windows and Mac, it's sadly not optimized as much as UAE4Arm is. The latter uses lookup tables to the extreme; all if/then/else statements have been replaced by fast switches – it's just optimized beyond anything else running on ARM.
The result on the PI is phenomenal, but sadly FS-UAE was all I could play with. I will try to compile UAE4Arm on the Tinkerboard later; right now I'm too busy.
Not optimized at all
It is hard to pinpoint exactly where the problem is, but I suspect the older codebase of FS-UAE combined with poorly written drivers is what resulted in the absolutely sluggish behavior.
Don't get me wrong: if all you want to do is play some A500 or A1200 games, the Tinker is more than capable. But if you want to run the same high-performance desktop that Amibian gives you on the PI – you can pretty much forget it.
I set the system to emulate an A4000, activated the JIT engine and set the emulated display to match my desktop. This should run like hell – yet it was slow as a snail. You could literally see the mouse-pointer slowly trying to keep up as I moved it across the desktop.
One interesting thing though: when I ran FS-UAE on the first image, the beta image where everything was crap, it actually ran exceptionally well. But here, on the so-called stable and “up to date” version, it was useless for anything but clean floppy gaming.
There are many factors involved in this. The drivers sit at the bottom, just above the kernel, and expose the hardware so it can be used through a common API. On top of this you have stuff like OpenGL, and on top of that again you have SDL, which simplifies typical gaming and graphics tasks. This may sound like a long road for code to travel, but it's actually very fast – if the drivers expose the power of the GPU, that is.
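To make the layering concrete, here is a minimal FreePascal sketch (assuming the common SDL2 header translation for FPC; my own illustration, not code from any of the projects mentioned): we ask SDL for a hardware-accelerated renderer, and whether we actually get one depends entirely on the driver layer underneath.

program RendererProbe;

uses
  SDL2;

var
  Window: PSDL_Window;
  Renderer: PSDL_Renderer;
begin
  // SDL talks to the driver/kernel layer for us behind this call
  if SDL_Init(SDL_INIT_VIDEO) < 0 then
    Halt(1);
  Window := SDL_CreateWindow('probe',
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    640, 480, SDL_WINDOW_SHOWN);
  // request hardware acceleration; if the GPU drivers cannot deliver,
  // this returns nil and we would have to fall back to software
  Renderer := SDL_CreateRenderer(Window, -1, SDL_RENDERER_ACCELERATED);
  if Renderer = nil then
    WriteLn('no accelerated renderer: ', SDL_GetError)
  else
    SDL_DestroyRenderer(Renderer);
  SDL_DestroyWindow(Window);
  SDL_Quit;
end.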
I can only conclude at this point that the Tinkerboard has the firepower – I have seen the results of tests performed by others (and tested a retro-gaming image), and you feel the difference almost immediately when you boot the device. But the graphics drivers seem crippled somehow. It's easy to point the finger at Asus and say they have done a terrible job at writing drivers here, but it can also be a bottleneck elsewhere in the system.
This is something Raspberry PI has got so right. Every part of the system has been optimized for the hardware, which makes the PI feel smooth even under a lot of stress. It delegates all the hard graphical stuff to the GPU – and the GPU just deals with it.
Sadly this is not the case with the Tinkerboard so far. And it really is such a shame because the specs speak for themselves.
Android
Next time we will be giving Android a test, and hopefully that will have drivers that are more up to date than whatever we have seen so far.
Smart Pascal: stop sharing beta code
I was a bit sad to hear that our RTL, despite the testing team being a very closed group of people, had made its way into the hands of others – people who then blog or comment that it's unstable or that too many changes have been added.
When the phrases “early beta” and “our first sprint is to test the low-level functionality, like pointers, memory allocation and manipulation, streams, compression, codecs and type conversion” are clearly stated, why on earth would someone start at the other end (the visual, high level), which is due for changes weeks into the future?
Since this is doing the rounds I figured it is only fair that I clear up a few things:
An async run-time library that should deliver identical behavior on 9 operating systems and a plethora of browsers is not something you slap together over a weekend. Many of the solutions in SMS are the result of months of hard work; research and testing on PC, Mac, various mobile devices and six different embedded boards.
I don't know how many hours I have put into this project these past 6 years, but it's more than I care to admit.
Obviously an early beta is not going to give you 100% compatibility. Especially not since the whole point of the first sprint was for our group to write, test and make sure the low-level methods are rock solid before we move on.
The second sprint is due soon; here we will look at databases, data binding, node.js and writing system-level services (via the PM2 process manager).
The last sprint will focus on the visual, high-level aspects of the RTL: things like custom controls, layout, non-visual components, event objects, messaging and user-interface code.
I’m really gutted that we didn’t even make it through the first sprint before the code was shared. I have no idea who did this, and I frankly don’t care. But from now on I will only throw ball with people I trust and have a friendship with.
People doing reviews and publishing opinions on an upgrade that is presently 1/3 into the final release candidate – I just feel that is unfair.
Polyfills are not shims, and they are not dangerous
It was voiced that the new RTL loads in a couple of files that were considered dangerous by Google. Actually, that couldn't be further from the truth.
But stop and think about this: why would I link to external JavaScript files for things like mutation-events and observers? If these things are available in browsers (which they have been for a long time) why would I need external files?
The answer is that these files are polyfills. And there is a huge difference between what is known as a shim, and a polyfill.
A polyfill is a small JavaScript library that checks if the browser supports the features you want. But if it doesn’t – then the polyfill will register its own code to compensate for the lack of a function or feature. In other words a polyfill is a full implementation (!)
So even if Google, Mozilla or Microsoft remove support for node-observers or mutation-events tomorrow, your application won't be affected by it. Because the polyfill libraries are just that: full library implementations with no dependencies.
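To illustrate the principle in plain Object Pascal terms – a conceptual sketch only, with made-up names (the real polyfills are JavaScript libraries): test for the capability, and if it is missing, register a complete replacement under the same entry point.

program PolyfillPattern;
{$mode objfpc}

type
  TObserveProc = procedure(const Target: string);

// stand-in for the implementation the browser may or may not provide
procedure NativeObserve(const Target: string);
begin
  WriteLn('native observer attached to ', Target);
end;

// the "polyfill": a complete, dependency-free replacement
procedure FallbackObserve(const Target: string);
begin
  WriteLn('polyfill observer attached to ', Target);
end;

function NativeSupportDetected: Boolean;
begin
  // in the browser this would be a feature test, e.g. checking that
  // MutationObserver exists; hard-coded here for the sketch
  Result := False;
end;

var
  Observe: TObserveProc;
begin
  if NativeSupportDetected then
    Observe := @NativeObserve
  else
    Observe := @FallbackObserve; // nothing breaks if support vanishes
  Observe('document.body');
end.

A shim, by contrast, merely adapts or patches an existing API and still depends on it being there – which is exactly why the distinction matters.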
In the early beta, both the observer and mutation-events libraries were referenced by default. This has already been changed: unless you include SmartCL.Observer.pas or SmartCL.MutationEvents.pas, these will no longer be linked automatically. That was never the plan either.
Had one of you (I have come across 3 different places with comments and info on the RTL around the web) bothered to just ask me, I would have included you in the beta process and explained this.
The executables have become so large!
This is another statement I stumbled over: “The new RTL is huge and it generates large JavaScript binaries!”.
First of all, you were using an early beta, which means things will be in flux while each level of the RTL is being tested, probed and verified. The Pascal uses section in some units references code they no longer use; cleaning that up is due for sprint 3. But again, in our first sprint the focus was on memory, binary data, type conversion, streams, compression and various low-level stuff – not high-level code!
As for size, that is actually wrong: our namespacing and unit fragmentation are there for a reason. By dividing files into 3 namespaces and fragmenting large units with many classes into more files, our linker is able to trim away unused code more efficiently, as well as optimize your code beyond anything people have seen so far.
Rule of thumb: when you have finished writing your application, always switch on the optimization like you see above for the final build. So yes, you are expected to use packing, obfuscation etc. before deployment. Not just to protect your code, but also to make it load and run faster.
Since you will either use your applications on a mobile device, an embedded device or just online like a website (or desktop via nodewebkit compilation) – you can further speed up loading by activating gzip compression on the server. You will be surprised just how fast and small your applications are when you tune them correctly.
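For example, with an nginx front-end the well-known gzip directives are all it takes (a minimal sketch; Apache and node.js compression middleware offer the same capability):

gzip            on;
gzip_comp_level 6;
# text/html is always compressed once gzip is on
gzip_types      text/css application/javascript application/json;

Compiled JavaScript is plain text and compresses extremely well, so activating this is close to free performance.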
Practical example
The Smart Embedded and Cloud desktop consists of roughly 40+ unit files. Since this desktop can emulate older systems, like the 68k Amiga, Atari, Super Nintendo and much more (more about those later) I have made linking these in optional (read: you have to include the units in order to link the code in, UAE.Core.pas being the primary file).
Without the 68k emulation layer (and various other emulators, which are just added for fun and inspiration) and the Aros ROM files (the BIOS for the emulator), we are looking at a code size in the ballpark of 4.7 megabytes – which is nothing compared to high-end native solutions.
If you include the 68k emulation layer, the end result is a 10 megabyte index.html file. Again this is small potatoes if you work with embedded systems or kiosk booths.
But since the new RTL design was created to be heavily optimized, far better than the older RTL ever was, the end result after compression is 467 kilobyte (!). When you compress the version that includes the 68k emulation layer – it optimizes down to 1.5 megabyte of code.
- No 68k emulation layer = 4.7 megabyte uncompressed
- No 68k emulation, compressed = 467 kilobyte (!)
- 68k emulation added = 10.2 megabyte uncompressed
- 68k emulation included, compressed = 1.5 megabyte

From 4.7 megabyte to 467 kilobyte
This level of compression and optimization was planned. I have re-arranged the whole RTL in order to create the best possible circumstances for the compression and obfuscation modules – and I believe the results speak for themselves.
Is the new RTL larger? Indeed it is, with plenty of new classes, controls and libraries. The more features you want to enjoy, the more infrastructure must be in place. That is just how it is, regardless of language.
But you should also know that the new and longer inheritance chain has little or no impact on execution speed. This is what the compiler options “de-virtualization” and “smart linking” are all about. In most cases the code for TW3OwnedObject and TW3OwnedErrorObject is linked into TW3Component. Here is the inheritance chain as of writing:
- TW3OwnedObject
- TW3OwnedErrorObject
- TW3Component
- TW3TagObj
- TW3TagContainer
- TW3MovableControl
- TW3CustomControl
Why did I change this? There are many reasons. First of all I wanted to make the RTL more compatible with Delphi, where names like TComponent and TCustomControl are known and understood by most. So when facing TW3Component or TW3CustomControl, most Delphi developers will intuitively know what these are all about. TW3Component is also the base class for non visual, drag & drop components.
TW3OwnedErrorObject = class(TW3OwnedObject)
private
  FLastError: string;
  FOptions: TW3ErrorObjectOptions;
protected
  property ErrorOptions: TW3ErrorObjectOptions read FOptions;
  function GetLastError: string;
  procedure SetLastErrorF(const ErrorText: string; const Values: array of const);
  procedure SetLastError(const ErrorText: string); virtual;
  procedure ClearLastError;
public
  property LastError: string read FLastError;
  constructor Create(const AOwner: TObject); override;
  destructor Destroy; override;
end;
There are also other benefits, like uniform error handling. While this might seem useless in an exception-based language, there are places where an exception is not something you want to throw – like in a system-level service running under node.js.
In such cases you should catch whatever exception occurs and call SetLastError() with the description. The caller will then check if something went wrong and can take measures.
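To make that concrete, here is a hedged usage sketch; TMyFileService and LoadConfig are illustrative names, not part of the RTL:

type
  TMyFileService = class(TW3OwnedErrorObject)
  public
    function LoadConfig(const Filename: string): Boolean;
  end;

function TMyFileService.LoadConfig(const Filename: string): Boolean;
begin
  ClearLastError();
  try
    // attempt to read and parse the file here ...
    Result := True;
  except
    on e: Exception do
    begin
      // record the failure instead of re-raising it
      SetLastError(e.Message);
      Result := False;
    end;
  end;
end;

The caller then checks the outcome without any try/except of its own:

if not Service.LoadConfig('settings.json') then
  WriteLn('load failed: ' + Service.LastError);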
Embedded, a different beast
One of the things Smart Pascal does exceptionally well is embedded platforms. Regardless of what you have been hired to do – be it a kiosk booth in a museum, a touch-based vending machine, a parking payment terminal or just some awesome IOT project – Smart Pascal makes it child's play. And the reason it's so easy to write powerful web applications is precisely the broad scope of our RTL.
Smart Pascal is not a tool for creating websites (well you can, but that’s not what it was designed for). It is a tool to write high quality, complex and feature rich applications.
The distinction between a “web application” and “website” may seem subtle at first, but it all boils down to depth. The Smart RTL now contains a huge amount of classes and third party libraries that simplify writing modern applications with depth.
Other frameworks that (for some reason) are very popular, like jQuery, jQuery UI and ExtJS, lack this depth. They look great, but that's it. I recently caught up with a project that had taken the developers a whole year to create in pure JS. It took me 3 weeks to finish what they had spent over 12 months implementing. And that has to do with depth and proper inheritance – not the absurd prototype cloning JavaScript coders use.
To sum up: if you want to enjoy the same features and power in the browser as you do with Delphi or Lazarus on your desktop, then you can't expect Smart to produce binaries smaller than its native counterparts.
But more importantly, on embedded systems like the Raspberry PI, the UP x86 board or the new and sexy Tinkerboard – do you really think it's going to matter if your application is 4, 8 or 10 megabytes? By now you should have realized that our compressor and obfuscator bake that down to roughly 460 kilobytes or 1.5 megabytes – both peanuts compared to similar systems written in Java, C# or Delphi.
Ask and ye shall receive
I don't really care who shared my RTL, but I am sad that it happened. I'm not a difficult person when it comes to these things – all you have to do is ask.
Also remember that the RTL is a copyrighted product. It belongs to a registered financial entity (read: company) and is not subject to distribution.
FMX4Linux is coming, and we can't wait!
When Embarcadero announced Linux support for their Tokyo release of Delphi, my soul literally left my body for a moment. Could it really be true? After all these years, has Embarcadero done what many said would be impossible?
I must admit that people saying something is impossible has lost its sting for me. Over the past 3 years I have done one of these impossible things on a weekly basis, yet people are just as shocked every single time. But this time – I was the one in a state of excitement.
As an outspoken (an understatement perhaps) active blogger, my skin has grown thick over the years. I also get to see a lot of cool tech long before it’s commercially available and mainstream – so it takes more for me to be swayed and dazzled. And you grow a healthy instinct for separating bullshit from true technical achievements too.
But yes, I admit it – this time Embarcadero surprised me in every positive way imaginable. If you follow my blog you know that I call it as I see it and hold little back. But this was a purely positive experience.
Now I know what you are going to say: everyone knew about this, right? Yeah, me too. But my mind has been elsewhere lately with work and projects, so I didn't catch the release buzz from the closed forums (well, “closed” is a matter of perspective; I have friends from Russia to the United States, and from China to the Sudan) – and for the first time since the Borland days, Embarcadero got the drop on me.
I'm the kind of guy that runs on passion. Delphi, Smart Pascal and even object pascal as a general language is not just work for me – it's something I love to use. I relax and enjoy myself when coding. So you can imagine my reaction when my boss sent me a message with “Download Tokyo and give me a report”. It was close to midnight, but I was out of that bed faster than bacon on toast, ran into my home office wearing nothing but boxers and a Commodore t-shirt – and threw myself into the Embarcadero Developer Network's download section.
From hero to zero in 2 seconds
I think it was around 07:00 the next morning, one hour before I was due for work, that the magic phrase “command line and system services [daemons] only” hit home. And yes, I “kinda” knew that before – but maybe, just maybe, Embarcadero had thrown in visual applications in the 11th hour. I even had a friendly bet with Jim McKeeth that an FMX UI solution would appear less than 24 hours after release (more about that later).

Now that is a beautiful thing
Either way, it was quite the anti-climax after all that work in VMWare installing Delphi from scratch, waiting, hoping and praying. First of all because I remember watching an in-depth technical review about Firemonkey by David Intersimone a few years back; the one where he describes the Firemonkey architecture in detail – especially how the abstraction layer between the visual control framework and the actual OS made it possible for FMX to quickly adapt to new environments. Firemonkey is a complex and highly adaptable framework, but its biggest strength is, paradoxically enough, its simplicity. A simplicity achieved by abstracting the UI from the rendering functionality. At least as much as possible; you still have to deal with OS-level windowing, security and all of that – so it's no walk in the park either.
In the presentation (sadly I don't have a link to this one) David went to great lengths to explain that regardless of operating system, as long as someone implemented a driver class that exposed the set of features FMX needs – Firemonkey would run as long as the compiler supported the instruction set. Visual engines could be DirectX, OpenGL, Cairo or whatever makes sense on that particular platform. As long as the “bridge class” talking with the operating system is there – Firemonkey can run on a toaster if need be.
So Firemonkey has the same abstraction concept that we use in the VJL (Visual JavaScript Component Library) for Smart Pascal (differences notwithstanding). If it's Windows you are running on, DirectX is used; if it's OS X, then Apple's implementation of OpenGL runs the show – and if you are on Linux you can pick between OpenGL and Cairo. I must admit I haven't looked too closely at Cairo, but I know it was designed to make advanced composition and UI rendering more efficient.
Why Tokyo didn't ship with visual application support is beyond me, but considering the timing of what happened next, I have drawn my own conclusions. It doesn't really matter to be honest – the point is we got it, and it rocks!
FMX4Linux to the rescue!
Remember the wager I mentioned with Jim McKeeth? It wasn't a serious wager; I just commented and said “a Linux FMX solution will appear within 24hrs, you can bet on it” and added a smiley. I had no idea who or how, I just knew it was going to turn up.
Because one thing is a sure bet: the Delphi community is made up of highly resourceful, inventive and clever people. And I was pretty sure that it wouldn't take many hours before someone came up with a patch or framework to fill the void. And right I was. Less than 24 hours later it was fact rather than conjecture.
So, less than 24 hours after Delphi Tokyo hit the shelves, Eugene Kryukov and Alexey Sharagin presented “FMX for Linux”, giving you both the missing rendering back-end that talks to the system – and some kind of “widget mapping” (as far as I can understand; I won't pretend to know how they did it) that renders your UI according to GTK. So it's not just a simple “patch”, theme or class that calls a handful of external routines; it's a full visual implementation of FMX for Linux. Impressive? Oh yeah, and then some!
Let us explore!
Over the next few days I will be giving you an in-depth look at how this system works. I'm also going to test drive Html Components and see if we can get that running under Linux as well. Since FMX for Linux is still in development we have to allow for that – but being able to target Ubuntu is pretty cool! And Remobjects, glorious Remobjects SDK – if that works out of the box I will dance the jig and upload it to YouTube, I swear to cow!

Linux is about to feel the full onslaught of object pascal
There is little doubt what the next Smart Pascal IDE will be based on, and when you combine FMX for Linux, HTML components, Remobjects SDK and TMS into one – you got serious firepower to play with that will give even the most hardened C/C++ QT developer reason to be scared. And that is before the onslaught of Data Abstract and Remobjects C# native compiler.
Oh man next weekend is going to be the best ever!
Let’s not forget ARM targets

The “PI killer” has arrived!
On a second note, we will also be looking at my latest embedded toys – namely the Asus Tinkerboard. I just got two of them in the mail today. It's going to be exciting to see how it fares against the Raspberry PI, ODroid XU4 and the Intel Atom based UP board.
We will also be testing how these two cards can be clustered together using node.js to work as one – and see how that impacts performance for our node.js based Smart Desktop project. These are exciting times indeed!
To make things even more interesting I will be pitching the Tinkerboard against the ODroid XU4 (the original version, not the passive, save-the-environment unicorn they push now) and against both the x86 UP v1 and the Raspberry 3b. Although I think the Raspberry PI is in for the beating of its life when the ODroid and Tinker are overclocked to blood-lust mode!

The Smart desktop has a powerful node.js back-end that packs a punch
So, when LLVM optimized JavaScript runs Mc68040 machine-code at 4 times the speed of a high-end Amiga 4000, I will be content.
So let’s do another “that’s impossible” shall we 🙂
Smart-Pascal: A brave new world, 2022 is here
Trying to explain what Smart Mobile Studio does and the impact it can have on your development cycle is very hard. The market is rampant with superficial frameworks that promise you the world, and investors have been taken for a ride by hyped-up, one-click “app makers” more than once.
I can imagine that being an investor is a bit like panning for gold. Things that glitter the most often turn out to be worthless – yet fortunes may hide beneath unpolished and rugged surfaces.
Software will disrupt most traditional industries in the next 5-10 years.
Uber is just a software tool, they don’t own any cars, yet they are now the
biggest taxi company in the world. -Source: R.M.Goldman, Ph.d
So I had enough. Instead of trying to tell people what I can do, I decided I'm going to show them instead. As the Americans say, “talk is cheap”. And a working demonstration is worth a thousand words.
Care to back that up with something?
A couple of weeks ago I published a video on YouTube of our Smart Pascal based desktop booting up in VMWare. The Amiga forums went off the chart!
For those that haven't followed my blog or know nothing about the desktop I'm talking about, here is a short summary of the events so far:
Smart Mobile Studio is a compiler that takes pascal, like that made popular in Delphi or Lazarus, and compiles it to JavaScript instead of machine code.
This product has shipped with an example of a desktop for years (called “Quartex media desktop”). It was intended as an example of how you could write a front-end for kiosk machines and embedded devices. Systems that could use a touch screen as the interface between customer and software.
You have probably seen those info booths in museums, universities and libraries? Or the ticket machines in subways, train-stations or even your local car-wash? All of those are embedded systems. And up until recently these have been small and expensive computers for running Windows applications in full-screen. Applications which in turn talk to a server or local database.
Smart Mobile Studio is able to deliver the exact same (and more) for a fraction of the price. A company in Oslo replaced their $300 per-board unit with off-the-shelf $35 Raspberry Pi mini-computers. They then used Smart Pascal to write their client software and ran it in a fullscreen browser. The Linux distribution was changed to boot straight into Firefox in full-screen. No Linux desktop, just a web display.
The result? They were able to cut production cost by $265 per unit.
Right, back to the desktop. I mentioned the Amiga community. This is a community of coders and gamers that grew up with the old Commodore machines back in the 80s and 90s. A new Amiga is now on the way (just took 20+ years) – and the look and feel of the new operating-system, Amiga OS 4.1, is the look and feel I have used in The Smart Desktop environment. First of all because I grew up on these machines myself, and secondly because the architecture of that system was extremely cost-effective. We are talking about a system that delivered pre-emptive multitasking in as little as 512Kb of memory (!). So this is my “ode to OS 4” if you will.
And the desktop has caused quite a stir in the Delphi, cloud and retro communities alike. Why? Because it shows some of the potential cloud technology can give you. Potential that has been under their nose all this time.
And even more important: it demonstrates how productive you can be in Smart Pascal. The operating system itself, both the visual and non-visual parts, was put together in my spare time over 3 weeks. Had I been able to work on it daily (as a normal job) I would have knocked it out in a week.
A desktop as a project type
All programming languages have project types. If you open up Delphi and click “new” you are greeted with a rich menu of different projects you can make. From low-level DLL files to desktop applications or database servers. Delphi has it all.

Delphi offers a wide range of projects types you can create
The same is true for Visual Studio. You click “new solution” and can pick from a wide range of different projects: web projects, servers, desktop applications and services.
Smart Pascal is the only system where you click “new project” and there is a type called “Smart desktop” and “Smart desktop application”. In other words, the power to create a full desktop is now an integrated part of Smart Pascal.
And the desktop is unique to you. You get to shape it, brand it and make it your own!
Let us take a practical example
Imagine a developer given the task to move the company’s aging invoice and credit system from the Windows desktop – to a purely web-based environment.
The application itself is large and complex, littered with legacy code and “quick fixes” going back decades. Updating such a project is itself a monumental task – but having to first implement concepts like what a window is, tasks, user space, cloud storage, security endpoints, look and feel, back-end services and database connectivity, all of that before you even begin porting the invoice system itself? The cost is astronomical.
And it happens every single day!
In Smart Pascal, the same developer would begin by clicking “new project” and selecting “Smart desktop”. This gives him a complete desktop environment that is unique to his project and company.
A desktop that he or she can shape, adjust, alter and adapt according to the needs of the employer. Things like file-type recognition, storage, getting that database – all of these things are taken care of already. The developer can focus on the task at hand, namely delivering a modern implementation of the invoice and credit software – not wasting months trying to force JavaScript frameworks to do things they simply lack the depth to deliver.
Once the desktop has the look and feel in order, he would have to make a simple choice:
- Should the whole desktop represent the invoice system or ..
- Should the invoice system be implemented as a secondary application running on the desktop?
If it’s a large and dedicated system where the users have no need for other programs running, then implementing the invoice system inside the desktop itself is the way to go.
If however the customer would like to expand the system later, perhaps add team management, third-party web-services or open-office like productivity (a unified intranet if you like) – then the second option makes more sense.
On the brink of a revolution
The developer of 2022 is not limited to the desktop. He is not restricted to a particular operating system or chip-set. Fact is, cloud has already reduced these to a matter of preference. There is no strategic advantage of using Windows over Linux when it comes to cloud software.
Where a traditional developer writes and implements a solution for a particular system (for instance Microsoft Windows, Apple OS X or Linux), cloud developers deliver whole ecosystems: constellations of software constructed from many parts – micro-services developed in-house as well as services from others, like Amazon or Azure.
All these parts co-operate and can be combined through established end-point standards, much like how components are used in Delphi or Visual Studio today.
Access to products written in Smart is through the browser, or sometimes through a “paper thin” native host (Cordova Phonegap, Delphi and C/C++) that expose system level functionality. These hosts wrap your application in a native, executable container ready for Appstore or Google Play.
Now the visual content is typically the same, and is only adapted for a particular device. The real work is divided between the client (which is now very much capable) and your server back-end.
So people still write code in 2022, but the software behaves differently and is designed to function as a group (cluster). And this requires a shift in the way we think.
Scaling a solution from processing 100 invoices a minute to handling 100,000 invoices a minute is no longer a matter of code, but of architecture. This is where the traditional, native-only approach to software comes up short, while more flexible approaches like node.js are infinitely more capable.
What has emerged up until now is just the tip of the ice-berg.
Over the next five to eight years, everything is going to change. And the changes will be irrevocable and permanent.

The Smart Desktop back-end running as a system service on a Raspberry PI
As the Americans say, talk is cheap – and I’m done talking. I will do this with you, or without you. Either way it’s happening.
Nightly-build of the desktop can be tested here: http://quartexhq.myasustor.com/