Posts Tagged ‘ODroid’

RemObjects Elements + ODroid N2 = true

August 7, 2019 Leave a comment

Since the release of the Raspberry Pi back in 2012, the IoT and embedded market has exploded. The price of the Pi SBC (single-board computer) enabled ordinary people without any engineering background to create their own software and hardware projects; and with that the IoT revolution was born.

Almost immediately after the Pi became a success, other vendors wanted a piece of the pie (pun intended), and an avalanche of alternative mini computers started surfacing in vast quantities. Yet very few of these so-called "Pi killers" actually stood a chance. The power of the Raspberry Pi is not just its price; it's the ecosystem around the product, such as all those shops selling electronic parts that you can use in your IoT projects.


The ODroid N2, one of the fastest SBCs in its class

The ODroid family of single-board computers stands out as unique in this respect. Where other boards have come and gone, the ODroid family of boards has remained a stable, popular and excellent alternative to the Raspberry Pi. Hardkernel, the maker of ODroid boards and their many peripherals, is not looking for a "quick buck" like others have been. Instead they have slowly and steadily perfected their hardware and software, and seeded a great community.

ODroid is very popular at RemObjects, and when we added 64-bit ARM Linux support a couple of weeks back, it was the ODroid N2 board we used for testing. It has been a smooth ride all the way.


As I am typing this, a collection of ODroid XU4s is humming away inside a small desktop cluster I have built. This cluster is made up of 5 ODroid XU4 boards, with an additional ODroid N2 acting as the head (the board that controls the rest via the network).


My ODroid Cluster in all its glory

Prior to picking ODroid for my own projects, I took the time to test the most popular boards on the market. I think I went through eight or ten models, but none of the others were even close to the quality of ODroid. It's very easy to confuse aggressive marketing with quality. You can have the coolest hardware in the world, but if it lacks proper drivers and a solid Linux distribution, it's for all intents and purposes a waste of time.

Since IoT is something that I find exciting on a personal level, being able to target 64-bit ARM Linux has topped my wish-list for quite some time. So when our compiler wizard Carlo Kok wanted to implement support for 64-bit ARM Linux, I was thrilled!

We used the ODroid N2 throughout the testing phase, and the whole process was very smooth. It took Carlo roughly 3 days to add support for 64-bit ARM Linux and it hit our main channel within a week.

I must stress that while the ODroid N2 is one of our verified SBCs, the code is not explicitly about ODroid. You can target any 64-bit ARM SBC, provided you use a Debian-based Linux (Ubuntu, Mint etc.). I tested the same code on a NanoPI board and it ran on the first try.

Why is this important?

The whole point of the Elements compiler toolchain is not just to provide alternative compilers; it's also to ensure that the languages we support become first-class citizens, side by side with other archetypical languages. For example, if all you know is C# or Java, writing kernel drivers has been out of bounds. If you are working with traditional Java or .Net, you have to use a native bridge (like the service host under Windows). Your only other option was to write that particular piece in traditional C.


With Elements you can pick whatever language you know and target everything

With Elements that is no longer the case, because our compilers generate LLVM-optimized machine code; code that in terms of speed, access and power stands side by side with C/C++. You can even import C/C++ header files and work directly with the existing infrastructure. There is no middleware, no service host, no bytecodes and no compromise.

Obviously you can compile to bytecodes too if you like (or WebAssembly), but there are caveats to watch out for when using bytecodes on SBCs. The garbage collector can make or break your product, because when it kicks in, it causes CPU spikes. This is where Elements steps up and delivers true native compilation, for all supported targets.

More boards to come

This is just my personal blog, so for the full overview of the boards I am testing there will be a proper article on our official RemObjects blog. Naturally I can't test every single board on the market, but I have around 10 different models, which cover the common boards used by IoT and embedded projects.

But for now at least, you can check off the ODroid N2 (64-bit) and the NanoPI-Fire 2 (32-bit).

Repository updates

February 25, 2019 2 comments

As most know by now, I was running a successful campaign on Patreon until recently. I know that some are happy with Patreon, but hopefully my experience will be a wake-up call about the total lack of rights you as a creator have, should Patreon decide they don't understand what you are doing (which I can only presume was the case, because I was never given a reason at all). You can read more about my experience with Patreon by clicking here.

Setting up repositories

Having to manually build a package for each tier that I have backers for would be a disaster. It was time-consuming and repetitive enough to create packages on Patreon, and I don't have time to reverse-engineer Patreon either. Though I might do exactly that in the future, and release it as open-source, just to give them a kick in the groin in return.

To make it easier for my backers to get the code they want, I have isolated each project and sub-project in separate repositories on BitBucket. This covers Delphi, Smart Pascal, LDEF and everything else.


The CloudRipper architecture is coming along nicely. Here running on ODroid XU4

I’m just going to continue with the Tiers I originally made on Patreon, and use my blog as the news-center for everything. Since I tend to blog about things from a personal point of view, be it for Delphi, JavaScript or Smart Pascal — I doubt people will notice the difference.

So far the following repositories have been set up:

  • Amibian.js Server (Quartex Web OS)
  • Amibian.js Client
  • HexLicense
  • TextCraft (source-code parser for Delphi and Smart Pascal)
  • UAE.js (a fork of SAE, the JS implementation of UAE)

I need to clean up the server repository a bit, because right now it contains both the server-code and various sub projects. The LDEF assembler program for example, is also under that repository — and it belongs in its own repository as a unique sub-project.

The following repositories will be set up shortly:

  • Tweening library for Delphi and Smart Pascal
  • PixelRage graphics library
  • ByteRage buffer library
  • LDEF (containing both Delphi and Smart Pascal code)
  • LDEF Assembler

These are extremely busy days, so I need to do some thinking about how we can best organize things. But rest assured that everyone who backs the project, or a particular tier, will get access to what they support.

Support and backing

I have been looking at various ways to do this, but since most backers have simply said they want PayPal, I decided to go for that. So donations can be made directly via PayPal. One of the newer features in PayPal is recurring payments, so setting up a backer plan should be easy enough. I am notified whenever someone makes a donation, so it's pretty easy to follow up on.



Updates used to be monthly, but with these changes they will be ad-hoc, meaning that I will commit directly. I do have local backups and a local git server, so for parts of the project the commits will be pushed at the end of each month.

While all support is awesome, here are the tiers I used on Patreon:

  • $5 – "high-five": I'm not a coder, but I support the cause
  • $10 – Tweening animation library
  • $25 – License management and serial minting components
  • $35 – Rage libraries: 2 libraries for fast graphics and memory management
  • $45 – LDEF assembler, virtual machine and debugger
  • $50 – Amibian.js (pre compiled) and Ragnarok client / server library
  • $100 – Amibian.js binaries, source and setup
  • $100+ All the above and pre-made disk images for ODroid XU4 and x86 on completion of the Amibian.js project (12 month timeline).

So to back the project like before, all you do is:

  1. Register with Bitbucket (free user account)
  2. Setup donation and inform me of your Bitbucket user-name
  3. I add you on BitBucket so you are granted access rights

Easy. Fast and reliable.


Those that have been following the Amibian.js project might have noticed that a fair number of QTX units have appeared in the code. QTX is a run-time library compatible with Smart Mobile Studio and DWScript. Eventually the code that makes up Amibian.js will become a whole new RTL. This RTL has nothing to do with Smart Mobile Studio and ships with its own license.


QTX approaches the DOM in a more efficient way. It's faster, smaller and more powerful

Backers at $45 or beyond get access to this code automatically. If you use Smart Mobile Studio then this is a must. It introduces a ton of classes that don't exist in Smart Pascal, and also introduces a much faster and cleaner visual component framework.

If you want to develop visual applications using QTX and DWScript, then that is OK, provided the license is respected (LGPL, non-commercial use).

Well, stay tuned for more info and news!

Amibian.js under the hood

December 5, 2018 2 comments

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!


In a life-preserver no less 😀

But, with any new technology or invention there are two common traps that people can fall into. The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved into the most widely adopted programming language in the world, and runtime engines like Google's V8 run JavaScript almost as fast as compiled binary code ("native" meaning machine code, like that produced by a C/C++ compiler, a Pascal compiler or anything else that produces programs that run directly under Linux or Windows).

It takes some adjustment, especially for traditional programmers who haven't paid attention to where browsers have gone, but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled ("just in time", meaning the compilation takes place inside the browser) to real machine code using state-of-the-art optimization techniques. So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap you can fall into is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don't fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver, but also what it won't deliver. Rome was not built in a day, and it's wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and second to write a Visual Studio clone that runs purely in the browser. Since it's easy to mix these things up, I'm underlining the distinction here, just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined create a portable operating system that renders to HTML5. A system that was written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It's much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.


Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to another without any changes. So if you have a running Amibian.js system on your x86 PC and copy all the files to an ARM computer, you don't even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as "yet another web mockup", please remember what I said about JavaScript: the JavaScript of 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a "browser toy" into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It runs flawlessly at the browser's 60 fps cap (the engine itself could deliver more) and makes full use of standard browser technologies (WebGL).


Click on picture above to watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • An HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between a traditional, machine-code based OS and our web variation is that our version doesn't have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).
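The client/"kernel" relationship above can be sketched in a few lines. Note that the method names and envelope fields below are hypothetical stand-ins, not the real Ragnarok protocol; the point is the shape of the exchange: the client builds a request, the server dispatches it to a procedure, and the client renders whatever comes back.

```javascript
// "Kernel" side: a dispatch table of procedures, analogous to kernel
// entry points. The names here are invented for illustration.
const procedures = {
  "fs.readdir": (params) => ["Documents", "Music", "startup-sequence"],
  "session.whoami": () => ({ user: "demo", rights: "user" })
};

// Look up the requested procedure and return a response envelope.
function invoke(request) {
  const proc = procedures[request.method];
  if (!proc) return { id: request.id, error: "unknown method" };
  return { id: request.id, result: proc(request.params) };
}

// Client side: build a request, "send" it, render the response.
const response = invoke({ id: 1, method: "fs.readdir", params: { path: "/" } });
console.log(response.result.join("\n")); // the real client renders this visually
```

In the real system the request would of course travel over a socket rather than a direct function call, but the division of labor is the same.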


Enjoying other cloud applications is easy with Amibian.js, here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I'm deliberately avoiding a bare-metal OS; otherwise I would have written the system in a native programming language like C or Object Pascal. So I am not using JavaScript because I lack skill in native languages, I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I had used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances, because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up "the Amibian kernel" in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs written for Amibian.js can call on them to perform a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it's designed to run clustered, using as many CPU cores as possible. It's also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn't need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (the ODroid XU4, Raspberry Pi and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above come in.

As a software developer I have many customers who work with embedded devices and kiosk systems: companies that produce routers and set-top boxes, NAS boxes of various complexity, and ticket systems for trains and buses; and all of them end up having to solve the same needs.

What each of these manufacturers have in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, and unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock-solid application model suitable for distributed computing. You might have 1 ticket booth, or 10,000 nationwide. There are no systems available that are designed to deal with web technology on that scale. Yet 😉

Let’s look at a couple of real-life scenarios that I have encountered, I’m confident you will recognize a common need. So here are some roles that Amibian.js can assume and help deliver a solution rapidly. It also gives you some ideas of the economic possibilities.

Updated: Please note that we are talking JavaScript here, not native code. There are a lot of native solutions out there, but the whole point here is to forget about CPU, chipset and target, and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router, you log in to your router. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update, since it's connected to a native Apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch-capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel with varying degrees of success.
  • Hotels have more or less the exact same need, but on a smaller scale; especially the larger hotels, where the lobby has information booths and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on the size they can need anything from a single to a hundred nodes.
  • Schools and educators spend millions on training software and programming languages every year. Amibian.js can deliver both, and the schools would only pay for maintenance and adaptation; the product itself is free. Kids get the benefit of learning traditional languages while enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5-based media applications and give them the same depth as a native operating system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past, with the machine that defined multimedia: the Commodore Amiga. So the relation is more than just visual; Amibian.js uses the same system architecture, because we believe it's one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all, I'm not selling anything. It's not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done, Amibian.js will be free for everyone (LGPL). And I'm also writing it because it's something that I need and that I haven't seen anywhere else. I think you have to write software for yourself, otherwise the quality won't be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly, is because I use a compiler system called Smart Mobile Studio. This saves months and years of development time, and I can use all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project; many individuals have been involved, and the product provides a compiler, an IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready-to-use classes, or building blocks).


Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the Object Pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data-driven applications. It also thrives in embedded systems and low-level system services. In short: it's a lot easier to maintain 50,000 lines of Object Pascal code than 500,000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely using Smart Mobile Studio. This gives me as the core developer of both systems a huge advantage (who knows it better than the designer right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn't have classical OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the Object Pascal community: components and libraries written in Delphi or FreePascal, which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it's been a popular toolkit for ages (Pascal actually predates C by a few years).
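To illustrate the VMT idea, here is a hand-written sketch of how class-style OOP with virtual methods can be lowered onto JavaScript prototypes with an explicit method table. This is not the actual Smart Mobile Studio codegen (the class names and helper are invented for this example); it only shows the principle: derived classes copy the parent's table and override entries, and calls dispatch through the table.

```javascript
// Base "class": constructor function plus an explicit VMT object.
function TObject() {}
TObject.prototype.vmt = {
  toString: function () { return "TObject"; }
};
// Virtual dispatch goes through the VMT rather than a direct call.
TObject.prototype.callVirtual = function (name) {
  return this.vmt[name].call(this);
};

// Derived "class": prototype chain gives inheritance, and the VMT is
// copied from the parent, then selectively overridden.
function TAnimal(noise) { this.noise = noise; }
TAnimal.prototype = Object.create(TObject.prototype);
TAnimal.prototype.vmt = Object.assign({}, TObject.prototype.vmt, {
  toString: function () { return "TAnimal says " + this.noise; }
});

const a = new TAnimal("woof");
console.log(a.callVirtual("toString")); // the override wins, as in Pascal
```

A real compiler emits this plumbing for you, which is exactly why writing "class, inherit, override" in Object Pascal is so much less error-prone than maintaining prototype chains by hand.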

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be set up and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS; a batch script fires it up and you access it in your browser

So the short answer is yes, you install it. But it's the same as installing Chrome OS; it's not an application you just install on your Linux, Windows or OS X box. The whole point of Amibian.js is to have a platform-independent, chipset-agnostic system. Something that doesn't care if you are using ARM, x86, PPC or MIPS as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is how most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js as described above, but you do so on Amazon or Azure. That way you can log in to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js, since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it's designed to host more elaborate applications. Normally you would just embed an external website in an IFrame, but Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window's message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
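The message-port exchange might look roughly like the sketch below. The field names (`cmd`, `path`) are invented for illustration and are not the real Ragnarok specification; the stand-in port object just simulates the browser's MessagePort so the example is self-contained. What it shows is the contract: the hosted application only ever sends JSON, and the desktop validates every message before acting on it.

```javascript
// Stand-in for the browser's MessagePort, so the sketch runs anywhere.
// The desktop-side handler receives every message the app posts.
function makePort(kernelHandler) {
  return { postMessage: (msg) => kernelHandler(JSON.parse(msg)) };
}

const audit = [];
const port = makePort((msg) => {
  // The desktop validates the message before doing any work; a hosted
  // app that misbehaves can be shut down at this point.
  if (typeof msg.cmd !== "string") throw new Error("malformed message");
  audit.push(msg.cmd); // ...then route it to the appropriate service
});

// The hosted application never touches the filesystem directly; it can
// only ask the desktop to do so, via JSON over the port.
port.postMessage(JSON.stringify({ cmd: "file.read", path: "/docs/readme.txt" }));
```

The security benefit is that the desktop sits between the application and every sensitive resource, exactly like a kernel syscall boundary.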

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But like I talked about above they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
  • LDEF compiled bytecode applications, written against a 68k-inspired assembly language that is JIT compiled by the browser (via the asm.js technique) and runs extremely fast. The LDEF virtual machine is a sub-project of Amibian.js.

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.


Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part is to create the world's first cloud-based development platform; a full Visual Studio clone, if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic, it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic, with "no native code allowed". This is why the entire back-end and service layer targets node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason is that Chrome OS might be free, but it's only as open as Google wants it to be. Chrome OS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It's quite frankly boring and too boxed in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud or on cheap SBCs. A system that could scale from handling 10 users to 1000 users; a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn’t automatically mean bad, it actually means that it’s adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.


The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS. Quite simply because the way Amiga OS is organized, and the way you approach computing on the Amiga, is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It's a system that you could learn to use fully with just a couple of days of exploring, and no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as much of the original ROM Kernel functions as possible. Naturally I can’t port all of it, because it’s not really relevant for Amibian.js. Things like device-drivers serve little purpose for Amibian.js, because Amibian.js talks to node.js, and node talks to the actual system, which in turn handles hardware devices. But the way you would create windows, visual controls, bind events and create a modern, event-driven application has been preserved to the best of my ability.
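The actual Amibian.js API is not documented here, so the following is only a hypothetical sketch of what an event-driven, Intuition-style window object could look like in JavaScript. Every name in it (`AmiWindow`, `on`, `closegadget`) is my own invention for illustration, not the real interface.

```javascript
// Hypothetical sketch of an Intuition-style, event-driven window API.
// None of these names come from the real Amibian.js codebase.
class AmiWindow {
  constructor(options) {
    this.title = options.title || "Untitled";
    this.width = options.width || 640;
    this.height = options.height || 480;
    this.handlers = new Map(); // event name -> list of callbacks
  }
  on(event, callback) {
    if (!this.handlers.has(event)) this.handlers.set(event, []);
    this.handlers.get(event).push(callback);
    return this; // allow chaining, in the spirit of modern JS APIs
  }
  emit(event, payload) {
    for (const cb of this.handlers.get(event) || []) cb(payload);
  }
}

// Usage: create a window and react to its close gadget, Amiga style
const win = new AmiWindow({ title: "Shell", width: 800, height: 600 });
let closed = false;
win.on("closegadget", () => { closed = true; });
win.emit("closegadget");
console.log(closed); // true
```

The point is the shape of the programming model: you create a window, bind events, and react; exactly the workflow the original ROM kernel encouraged.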

But how does this thing boot? I thought you said server?

If you have set up a dedicated machine with Amibian.js, the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons, or services as they are also called), the core server is initialized, and a full-screen HTML5 view is set up to show the desktop.

But that is just for starting the system. Your personal boot sequence, which deals with your account, your preferences and adaptations, runs when you log in.

When you log in to your Amibian.js account, no matter if it's locally on a single PC, on a distributed cluster, or via the browser into your cloud account, several things happen:

  1. The client (the web page, if you like) connects to the server using WebSocket
  2. The login is validated by the server
  3. The client starts loading preference files via the mapped filesystem, and applies these to the desktop
  4. A startup-sequence script file is loaded from your account and executed. The shell-script runtime engine is built into the client, as is REXX execution
  5. The startup script sets up configurations, creates symbolic links (assigns) and mounts external devices (Dropbox, Google Drive, FTP locations and so on)
  6. When finished, the programs in the ~/WbStartup folder are started. These can be both visual and non-visual
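The login steps above can be sketched as a small client-side state machine. The message names below ("login-ok", "prefs", "startup-sequence") are assumptions of mine, not the real Amibian.js protocol; the sketch only shows the sequencing.

```javascript
// Minimal sketch of the client-side login/boot sequence described above.
// The message types are assumptions; the real protocol is not documented here.
function createBootSequence(send) {
  const state = { phase: "connecting", prefs: null, started: [] };
  return {
    state,
    handle(msg) {
      switch (msg.type) {
        case "login-ok":                 // 2. server validated the login
          state.phase = "loading-prefs";
          send({ type: "get-prefs" });   // 3. fetch preference files
          break;
        case "prefs":
          state.prefs = msg.data;
          state.phase = "startup-sequence";
          send({ type: "get-startup-sequence" }); // 4. fetch the script
          break;
        case "startup-sequence":         // 5-6. run script, start WbStartup items
          for (const program of msg.programs) state.started.push(program);
          state.phase = "desktop-ready";
          break;
      }
    }
  };
}

// Simulate the handshake without a real WebSocket connection
const sent = [];
const boot = createBootSequence(m => sent.push(m));
boot.handle({ type: "login-ok" });
boot.handle({ type: "prefs", data: { theme: "workbench" } });
boot.handle({ type: "startup-sequence", programs: ["Clock", "Shell"] });
console.log(boot.state.phase); // "desktop-ready"
```

In the real client the `send` callback would write to the WebSocket opened in step 1.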

As you can see, Amibian.js is not a mockup or "fake" desktop. It implements all the advanced features you expect from a "real" desktop. The filesystem mapping is especially advanced: file data is loaded via special drivers that act as a bridge between a storage service (a hard disk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).
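The driver model described above could look something like the sketch below. The three-method contract (`readFile`/`writeFile`/`list`) is my guess at a minimal interface, not the actual Amibian.js driver standard; a Dropbox or FTP driver would simply expose the same shape.

```javascript
// Sketch of the storage-driver idea: any object that conforms to a small
// contract can be mounted as a device. The contract shown here is an
// assumption, not the real Amibian.js driver standard.
const devices = new Map();

function mount(name, driver) {
  for (const fn of ["readFile", "writeFile", "list"]) {
    if (typeof driver[fn] !== "function")
      throw new Error(`driver for ${name}: missing ${fn}()`);
  }
  devices.set(name, driver);
}

// An in-memory driver; a Dropbox or FTP driver would expose the same methods
function ramDriver() {
  const files = new Map();
  return {
    readFile: path => files.get(path),
    writeFile: (path, data) => { files.set(path, data); },
    list: () => [...files.keys()]
  };
}

mount("RAM:", ramDriver());
devices.get("RAM:").writeFile("s/startup-sequence", "echo hello");
console.log(devices.get("RAM:").list()); // [ 's/startup-sequence' ]
```

Because every device answers to the same contract, copy and move operations between devices reduce to a read from one driver and a write to another.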

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js includes a JavaScript port of UAE (Unix Amiga Emulator), a fork of SAE (Scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy images with a game you love, that will run just fine in the browser. I even booted a 2-gigabyte hard-disk image 🙂

But Amiga emulation is just the beginning. More and more emulators are being ported to JavaScript; you have NES, SNES, N64, PSX I & II, Sega Mega Drive and even a Neo Geo port. So playing your favorite console games right in the browser is pretty straightforward!

But the really interesting part is probably QEMU, which allows you to run x86 instances directly in the browser too. You can boot Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point), but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side and only the graphics and sound are streamed back to the Amibian.js client in real time. This has been possible for years via Apache Guacamole, but doing it in raw JavaScript is more fitting with our philosophy: no native code!
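A toy illustration of the streaming idea: instead of shipping every full frame over the WebSocket, the server sends only the pixels that changed since the last frame. This is a drastic simplification of what Guacamole-style remoting actually does, but it shows the core trick.

```javascript
// Toy frame-delta encoder for server-side emulation streaming.
// Real remoting protocols use tiles and compression; this is the bare idea.
function diffFrame(prev, next) {
  const changes = [];
  for (let i = 0; i < next.length; i++) {
    if (prev[i] !== next[i]) changes.push([i, next[i]]);
  }
  return changes; // [index, newValue] pairs to ship to the client
}

function applyDiff(frame, changes) {
  for (const [i, v] of changes) frame[i] = v;
  return frame;
}

const frameA = new Uint8Array([0, 0, 0, 0]);     // last frame the client saw
const frameB = new Uint8Array([0, 255, 0, 128]); // new frame from the emulator
const delta = diffFrame(frameA, frameB);
console.log(delta.length); // 2 changed pixels instead of 4 sent
applyDiff(frameA, delta);  // client reconstructs the new frame
console.log(frameA[1]);    // 255
```

When most of the screen is static (a Workbench desktop, for instance), the delta is tiny compared to the full frame, which is what makes real-time streaming over an ordinary connection plausible.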

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don't have to be on the same machine; you can place them on separate machines and spread the workload, so everything runs faster.


Above: The official Amibian.js cluster, 4 x ODroid XU4s SBC’s in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of five ODroid XU4/S SBCs (single-board computers). Four of these are so-called "headless" computers, meaning that they have no HDMI port and are designed to be logged into and set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI port, which serves as "the master".

The architecture is quite simple: we allocate one whole SBC to a single service and allow the service to clone itself across all the CPU cores available (each SBC has 8 of them). With this architecture, the machine that deals with the desktop clients doesn't have to do all the grunt work. It accepts tasks from the user and hosted applications, and then delegates those tasks between the four other machines.

Note: the number of SBCs is not fixed. Depending on your use, you might not need more than a single SBC in your home setup, or perhaps two. I have started with five because I want each part of the architecture to have as much CPU power as possible. So the first "official" Amibian.js setup is a 40-core monster, shipping at around $250.

But as mentioned, you don't have to buy this to use Amibian.js. You can install it on a spare x86 PC you have, or daisy-chain a couple of older PCs on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBCs in the initial design all have a GPU (graphics processing unit) as well as audio capabilities. What they lack are GPIO pins and three additional USB ports. So each of the nodes in our cluster can handle graphics at blistering speed, but that is ultimately not their task. They serve more as compute modules that are given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40-core cluster I use has more computing power than northern Europe had in the early 80s; that's something to think about. And the price tag is under $300 (!). I don't know about you, but I have always wanted a proper mainframe: a distributed computing platform that I can log into and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant for emulating an Amiga directly in the browser, a more interesting design is to decouple the emulation from the output. In other words: run the emulation at full speed server-side and just stream the display and sound back to the Amibian.js display. This ensures that emulation of any platform runs as fast as possible, makes use of multi-processing (read: multi-threading) and fully utilizes the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world wide web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a redesign of the LDEF bytecode system, but this is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it's wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, people will start using it en masse and producing applications for it. It's always a bit of work to reach that point and create critical mass.


Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to use them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and an ordinary internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster and have LDEF emit compatible code; voilà, you can build app-store-ready applications from within a browser environment.


I also love that I can set up a dedicated platform that runs legacy applications and games, and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super-expensive hardware to order, no absurd hosting fees, and finally a system that we can all shape and use in a plethora of products: from a fully fledged desktop to a super-advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well, I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems, and indeed as a solo platform running on embedded devices!

I can't wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at:

UP board, first impressions for emulation

December 21, 2016 2 comments

To get the most out of this post please read my previous post, Embedded boards, finally. For general use and emulating classical Amiga, also read my next article: UP board, a perfect embedded board.

In the previous post I went through the hardware specs for both the ODroid XU4 ARM embedded board and the x86-based UP SoC board. Based on their specs I also made some predictions about the performance I expected them to deliver, what tasks they would be best suited for, and what they would mean for you as a developer – or indeed as an emulation enthusiast.

In this post we are going to start digging into the practical side of things. And what is probably on everyone’s mind is: will the UP board be powerful enough to emulate and run Amiga OS 4.1 final edition? What about retrogaming, HTML5 and general use?

Well, let’s dig into it and find out!

Note: If emulation is not your cup of tea, I urge you to reconsider. I am a dedicated developer and can tell you that emulation is one of the best methods of finding out what your hardware is capable of. Nothing is as demanding for a computer as emulating a completely different CPU and chipset, especially one as elaborate and complex as the Commodore Amiga. But I also use a Smart Pascal demo as a general performance test, so even if gaming is not your thing, the board and its capabilities might be.

EMMC storage, a tragedy without end

EMMC is cheap and easily available


The UP board uses something called EMMC storage, which is quite common in embedded devices. Your TV tuner probably has one, as does your $100+ router, and in all likelihood so does your NAS or laser printer. To make a long story short, this storage medium is flexible, cheap and easy for vendors to add to their custom chipset or SoC. It is marketed as a reasonable alternative to SSD, but sadly these two technologies have absolutely nothing in common except, perhaps, that they are both devices used to store data. But that's where any similarity stops; and truth be told, the same could be said for pen and pencil.

EMMC is an appalling technology, honestly. It works for some products, in the sense that you would gladly wear a cardboard box if the alternative were to walk around naked in public. For devices where responsiveness and efficiency are not the most pressing factors (like routers, TV tuners, set-top boxes and similar), it can even work well; but for devices where responsiveness and data throughput matter most (like a desktop or emulation), EMMC is a ridiculous choice.


Honestly, there are many cases where embedded boards are better off without it. I used to hate that the Raspberry PI 3 didn't ship with EMMC, but that was before I got the pleasure of trying the storage medium myself and experiencing what utter shit it truly is. It reminds me of ZIP disks; remember those? Lumpy, overgrown floppy disks that went extinct in the 90s without realizing it?

In terms of speed, EMMC sits somewhere between USB 2 and your average SD card. The EMMC disk on the UP board falls into the usable category at best. It works OK-ish with a modern operating system like Windows 10 or Ubuntu, but just like the Latte Panda and similar devices, it is haunted by a constant lag or dullness whenever IO is involved. It saturates the whole user experience, and it's like the computer is constantly procrastinating. Downloading a file that should take 5 minutes suddenly takes 20. Copying a large file over the network, like a Blu-ray HD file, is borderline absurd: it took over half an hour! It normally takes less than 20 seconds on my desktop PC.

I might get flamed for this but my Raspberry PI 3 actually performed better when using a high-speed USB stick. I did the exact same test:

  • Download the same file from my NAS right here at home
  • The data goes straight from my NAS to the router and then to disk
  • On the PI I used a SanDisk 256-gigabyte USB pen drive

I don’t have the exact number at hand, but we are talking 10-12 minutes tops, not half an hour. The PI is hopelessly inferior to the UP-board in every aspect, but at least the PI foundation was smart enough to choose low-price over completeness. The people behind UP could learn a thing or two here.

Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4


I simply cannot understand what they were thinking. Consider the following factors:

  • The speed of running a server is now determined by the read/write performance of your storage device. Forget about factors like memory, number of active threads, socket capacity or network throughput.
  • Should the operating system start paging, for example if you run a program that exceeds the memory Windows has left (Windows eats 1 gigabyte of RAM for breakfast), you are screwed. The system will jitter and lag while Windows desperately maps regions of the pagefile into memory, performs some work, then pages it back out before switching to the next process.

I really wish the architects of the UP board had just ditched EMMC completely, because it creates more problems than it solves. The product would be in a different league had they instead given it four USB 3 ports (it presently has only one USB 3 port; the rest are USB 2). While I can only speculate, I imagine the EMMC unit costs between $20 and $40 depending on the model (8 GB, 16 GB or 32 GB). The entire Kickstarter project would have been far more successful had they cut that cost completely. Just imagine: a powerful x86 embedded board with 4 gigabytes of RAM, 4 x USB 3 ports, outstanding graphical performance, excellent audio capabilities – and all of it for $80?

It would be nothing short of a revolution.


When it comes to graphics, the board just rocks! This is where you notice how slow the Raspberry PI 3 truly is. The Raspberry PI 3 (RPI3) ships with a fairly decent GPU, and that GPU has been instrumental to the success of the whole endeavour. Without it, all you have is a slow and boring ARM processor barely capable of running Linux. Try to compile something on the RPI3 (like node.js from source, as I did) and it will quickly burst your bubble.

The UP board ships with a very capable little graphics chipset:

  • Intel® HD 400 Graphics
  • 12 EUs, Gen 8, up to 500 MHz
  • Supports DirectX 11 and 12
  • Supports OpenGL 4.2, OpenCL 1.2, OpenGL ES 3.0
  • Built-in H.264, HEVC and VP8 encoding/decoding

The demo I use when testing boards is a JavaScript demo. You can watch it yourself here.


The particles JavaScript canvas demo was coded in Smart Mobile Studio and pushes the HTML5 graphics engine to the edge

Here are some figures for this particular demo. It will help you get a feel for the performance:

  • Raspberry PI 2b: 1 frame per second
    • Overclocked: 2 frames per second
  • Raspberry PI 3b: 3 frames per second
    • Overclocked: 7 frames per second
  • UP board: 18 frames per second
    • Overclocked: not tested yet

As you can see, out of the box the UP board delivers six times the graphics throughput of a Raspberry PI 3b. And remember, this is a browser demo. I use it because WebKit (the browser engine) involves so many factors, from floating-point math to memory management, from sub-pixel rendering to GPU-powered CSS3 effects.

What really counts, though, is that the PI's CPU utilization stays at 100% from beginning to end; this demo just exhausts the RPI3 completely. That is because the JavaScript demo does not use the GPU to draw primitives. It uses the GPU to display a pixel buffer, but drawing the pixels is done purely by the processor.
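The CPU-bound pattern described above can be reconstructed in a few lines. This is a generic sketch, not the actual Smart Mobile Studio demo code: every particle is plotted by the processor into a raw RGBA buffer, and the GPU's only job is to display the finished buffer.

```javascript
// Why such a demo is CPU-bound: the processor plots every particle into a
// raw RGBA pixel buffer; the GPU merely displays the result.
const W = 320, H = 200;
const pixels = new Uint8ClampedArray(W * H * 4); // same layout as ImageData

const particles = Array.from({ length: 1000 }, () => ({
  x: Math.random() * W, y: Math.random() * H,
  vx: Math.random() * 2 - 1, vy: Math.random() * 2 - 1
}));

function step() {
  pixels.fill(0); // clear the frame on the CPU
  for (const p of particles) {
    p.x = (p.x + p.vx + W) % W;  // move and wrap around the edges
    p.y = (p.y + p.vy + H) % H;
    const i = ((p.y | 0) * W + (p.x | 0)) * 4;
    pixels[i] = 255;             // red channel
    pixels[i + 3] = 255;         // fully opaque
  }
}

step();
// In a browser you would then blit the buffer once per frame:
// ctx.putImageData(new ImageData(pixels, W, H), 0, 0);
console.log(pixels.some(v => v !== 0)); // true: the CPU drew the frame
```

Scale the particle count up and the per-frame loop dominates everything, which is exactly why a weak CPU pegs at 100% while a stronger one strolls along.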

Where the RPI3 goes through the roof and is almost incapable of responding while running the demo in full-screen, the UP board hardly breaks a sweat.

It’s strolls along with a CPU utilization at 23%, which is nothing short of fantastic! Needless to say, its allocating only a fraction of its potential for the task. It is possible under Windows to force an application to run at a higher priority. The absolute highest mode is tp_critical, which means your program hogs the whole processor. Then you have tp_highest, tp_higher, tp_high, to_normal – and then some slow settings (tp_idle for example) that is irrelevant here.

Still, six times faster than the Raspberry PI 3b while utilizing only 23% of its available horsepower? That means we have some awesome possibilities at our fingertips! This board is perfect for the dedicated retro gamer, or for people who want to emulate more modern consoles – emulation that is simply too much for a Raspberry PI 3 or ODroid to handle. Personally I'm looking forward to the following consoles:

  • PlayStation 1 and 2
  • Nintendo Wii
  • Xbox One
  • Nintendo GameCube
  • Sega Saturn

Keep in mind that we are running Windows 10, not some esoteric homebrew Linux distro. You can run the very best emulators the scene has to offer. Hyperspin? No problem. PlayStation 2 emulation? No problem. Sega Saturn emulation? A walk in the park. And before you object to the Saturn: this console is actually a very difficult chipset to emulate. It has not one but three RISC processors, a high-quality sound chip and a distributed graphics chipset. It represents one of the most complex architectures even to this day.

As you can imagine, emulating such a complex piece of hardware requires a PC with some punch in it. But that's not a problem for the UP board. It will happily emulate the Saturn while playing your favorite MP3s and streaming a movie from Netflix.

Conclusion: the UP-board delivers a really good graphics experience. It has support for the latest DirectX API and drivers; OpenGL (and associated libraries) is likewise not a problem. This board will deliver top retro gaming experiences for years to come.

Amiga Forever

Right: since I own Amiga Forever from Cloanto, and recently bought Amiga OS 4.1 to use exclusively with it, it made more sense for me to just install Amiga Forever directly on the UP board and copy the pre-installed disk image over. I do have the latest ordinary build of UAE, but I wanted to see how Amiga Forever performed.


This is where I first noticed just how darn slow EMMC is. Even simple things like downloading Amiga Forever from Cloanto's website, not to mention copying the OS 4.1 disk image over the local network, took 10-15 minutes each (!). This is something that would have taken seconds on my ordinary work PC.

At this point I became painfully aware of the board's limitations. It is just a low-priced x86 embedded board after all; it's not going to cure cancer or do your dishes. Just as the Raspberry PI 3 suffers whenever you perform disk IO, so will every embedded board bound to a less-than-optimal storage device. The CPU is awesome, the memory is great, the USB 3 port is blazing, the graphics and GPU are way ahead of the competition; in short, the UP board is a fantastic platform! So don't read my negative points about EMMC storage as a verdict on the board itself.

Now let’s look at probably the most advanced emulation task in the world: to emulate the Commodore Amiga with a PPC accelerator.

Emulating Amiga OS 4.1, is the UP board capable of it?

If we look past the staggering achievement of emulating not one chipset but two (68k and PPC) side by side, I can tell you that Amiga OS 4.1 works. I write "works" because, in all honesty, you are not going to enjoy using Amiga OS 4.1 on this board. Right now I would categorize it as interesting and impressive, but it's not fast and it's not enjoyable. Then again, OS 4.1 and PPC emulation is probably the hardest and most demanding emulation there is. It is a monumental achievement that it works at all, let alone boots, on a $150 embedded computer the size of a credit card.


Next-generation Amiga: it works, but the board is not powerful enough to deliver an enjoyable experience. Classic Amigas, however, run as well as you can imagine.

So if you are considering the UP board solely to emulate a next-generation Amiga, then either wait for the UP 2 board, which will be released quite soon (March 2017), or look at alternatives. I honestly predicted the UP board would pull this off, but sadly that was not the case. It lacks that little extra; just a little more horsepower and it would be a thing of beauty.

Storage blues

Had I known how slow the EMMC storage was, I would have gone for the UDOO Ultra instead, which is a big step up both in power and price. It retails at a hefty $100 above the UP board, but it also gives you a much faster CPU, 8 gigabytes of memory and, hopefully, a faster variation of EMMC storage. But truth be told, I sincerely doubt the disk IO of its EMMC is significantly faster than the UP board's.

Either way, if fast and furious PPC emulation is your thing, then $250 for the UDOO Ultra is still half the price of the A1222 PPC board. I mean, the A1222 next-generation Amiga motherboard is just that: a motherboard. You don't get a nice, reasonably priced Amiga in a sexy case or anything like that. You get the motherboard, and then you need to buy everything else on the side. $500 buys you a lot of x86 power these days, so there is no way in hell I'm buying a PPC-based Amiga motherboard for that price. Had it been on sale 10-15 years ago it would have been a revolution, but those specs in 2017? PPC-based? Not gonna happen.

So if you really want to enjoy OS 4.1 and use it as a real desktop, then I have to say: go for the real deal. Get the A1222 if you really want to run OS 4.1 as your primary desktop. I think it's both a waste of time and money, but for someone who loves the idea of a next-generation Amiga, just get it over with and fork out the $500 for the real thing.

Having said all this, emulating OS 4.1 on the UP board is not terrible, but it's not particularly usable either. If you are just curious and will perhaps fire up OS 4 on rare occasions, it may be enough to satisfy your curiosity; but if you are a serious user or software developer, you can forget about the UP board. Here it's not just the EMMC that is a factor; the CPU simply doesn't have the juice.

Classic Amiga is a whole different story. Traditional UAE emulating an Amiga 4000 or 1200 is way beyond anything the Raspberry or ODroid can deliver. The same goes for retrogaming or using the board for software development.

Unless you are prepared to do a little adaptation that is.

Overcoming the limitations

Getting around the slow boot time (for OS 4) is in many ways simple, as is giving the CPU a bit of a kick to remove some of the dullness typical of embedded boards. The rules are simple:

  • Files that are accessed often, especially large files, should be dumped on a USB thumb drive. Make sure you buy a USB 3.0-compliant drive with the fastest read/write speed you can find, and naturally, use the USB 3.0 socket for it.
  • Add a fan, then do some mild tweaking in the CPU-Z and GPU-Z overclocking tools for Windows. As mentioned in other articles, you don't want to mess around with overclocking if you don't have the basic setup. Lack of cooling will burn your SoC in a couple of minutes. There is also much to be said for restraint: I find that mild overclocking does wonders, and it also helps preserve the CPU for years to come (as opposed to running the metal like a hound from hell, burning it to a crisp within a year or less).
  • Drop the EMMC completely and install Windows on a dedicated USB 3.0 external hard disk. But again, is it a full PC you want, or a nice little embedded board to play with?

Since Amiga Forever and Amiga OS 4.1 were where I really had problems, the first thing I did was to get the Amiga system files out of the weird RP9 file format. It is basically just a ZIP file containing the hard-disk image, configuration files and everything you may need to run that Amiga virtual machine. But on a system with limited IO capacity, that idea is useless.

Once I had the hard-disk HDF file exported, I mounted it and also added a hard-disk folder. Then it's just a matter of copying the whole Amiga OS disk over to the folder. This means the Amiga files now reside directly on the PC drive, rather than in some exotic structured-storage file mimicking a hard disk from the mid-90s.

As expected, booting from Windows to Workbench went from one minute (yes, I kid you not!) down to 20 seconds. Still a long wait by any measure, but this can be solved. It became clear to me that maybe, just maybe, the architects of an otherwise excellent embedded board had a slightly different approach to storage than we do.

I know for a fact that it's quite common to use EMMC purely as a boot device and then distribute the IO payload to external drives or USB sticks. Some do the opposite: place the OS on a high-speed USB stick (as mentioned above) and use the EMMC to store their work. By "work" I am referring to documents, email, perhaps some music, images and your garden-variety assortment of data.

Add overclocking to the mix and you can squeeze much better performance out of this fantastic little machine. I still can't believe this tiny thing is capable of running Windows 10 and Ubuntu so well.

Final verdict

I could play with this board all day long, but I think people get the picture by now.

The UP board is fantastic and I love it. I was a bit let down that it doesn't have enough juice to run Amiga OS 4.1 Final Edition, but in all honesty it was opportunistic to expect that from an Intel Atom processor. I'm actually impressed that it could run it at all.

I was also extremely annoyed with the EMMC storage device (a topic I have exhausted in this article, I think), and in the end I just disabled the whole bloody thing and settled on a high-quality USB 3 stick with plenty of space. So it's not the end of the world, but it does feel like I have thrown $50 down the toilet for a feature I will probably never use. But who knows; when I use it to run my own programs and design a full system, perhaps it won't matter as much.

Is it worth $150 for the high-end model? I cannot get myself to say yes. It is worth it compared to the high-end ARM boards that go for $80-$120, especially since x86 runs Windows, and that opens up a whole universe of software that Linux just doesn't have; at least not with the same level of user-friendliness.

Having said that, there are two new x86 boards just around the corner, both cheaper and more powerful. So would I buy this board again if I could return it? No.

I love what I can do with it, and it's way ahead of the Raspberry PI and the ODroid boards I have gotten used to and love, but the EMMC storage just ruins the whole thing.

Like I wrote, perhaps it will grow on me, but right now I feel it's overpriced compared to what I could have gotten elsewhere.

UP-board for software developers

So far I have focused exclusively on retro gaming and emulating next-generation, PPC-based Amiga systems. This is an extremely demanding task, way beyond what a normal Windows application would ever expect of the system. So I haven't really written anything substantial about what UP has to offer software developers or system integrators.

Using this board to deliver custom applications written in Delphi, C++ or Smart Pascal [node.js] is a whole different ballgame. The criteria in my line of work are very different, and it's rare that I would push a board like I have done here. It may happen naturally – perhaps if I'm coding a movie-streaming server that needs to perform conversion on demand. Even an SVN or GitHub server can get a CPU spike if enough people download a repository in ZIP format (where a previously made file is not in the cache). But if you have ever worked with embedded boards, you know what to avoid, and that you cannot write code like you would for the commercial desktop market.

The UP board is more than suitable for products like a NAS, personal cloud server or kiosk systems running node.js and rendering with webkit. Native Delphi or C++ applications perform even better.



If you need an affordable, powerful and highly versatile embedded board for your Delphi, C++ or Smart Pascal [node.js] products, then I can recommend the UP boards without any hesitation! The board rocks and has more than enough juice to cover a wide range of appliances and custom devices.

As a Delphi- or Smart Pascal-centric board it's absolutely brilliant. If you work with kiosk systems, information booths or media servers along the lines of Plex or Asustor, then no other board on the market gives you the same bang for your buck. There is simply no traditional embedded retailer that can offer anything close to UP in the $90-$150 range.

If we compare it to traditional embedded boards, for instance a similar configuration sold by Advantech, you save a whopping 50% by getting the UP board instead (!)

Take the MIO-2261N-S6A1E embedded board. It has roughly the same specs (or in the same ballpark, if you will), but if you shop at Advantech you have to fork out 215 euros for the motherboard alone! No RAM, no storage – just the actual motherboard. You don't even get a power supply.

What you do get, however, is a kick-ass SATA interface; but you still have to buy a drive.

If we try to match that board to what UP gives you for $150 (and that is the high-end UP board, not the cheap model), you hit the 300-euro mark straight away just by adding the RAM chips and a power supply. And should you add a tiny SSD to the equation, you have reached a price tag of 350 euros ($366). So the UP board is not just competitive, it's trend-setting!

So even though I would refrain from getting the UP board purely for emulating next-generation PPC Amiga computers, I will most definitely be using it for work! It is more than capable of becoming a kick-ass NAS, a fast and responsive multimedia center, a web server for small businesses, a node.js service stack or cloud machine – and when it comes to kiosk systems, the UP board is perfect!

So for developers I give it 4 out of 6 stars!