
Amibian.js under the hood

December 5, 2018

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!

In a life-preserver no less 😀

But with any new technology or invention there are two common traps people can fall into. The first is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google’s V8 run JavaScript almost as fast as native, compiled binary code (“native” meaning machine code, like that produced by a C/C++ compiler, a Pascal compiler or anything else that produces programs that run under Linux or Windows).

It takes some adjustment, especially for traditional programmers who haven’t paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecode. That bytecode is then JIT compiled (“just in time”, meaning the compilation takes place inside the browser) to real machine code using state-of-the-art optimization techniques. So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don’t fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won’t deliver. Rome was not built in a day, and it’s wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it’s easy to mix these things up, I’m underlining the distinction here – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined create a portable operating-system which renders to HTML5. It is a system written using readily available web technology, designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It’s much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.

Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to another without any changes. So if you have a running Amibian.js system on your x86 PC and copy all the files to an ARM computer, you don’t even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup”, please remember what I said about JavaScript: the JavaScript of 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It runs flawlessly at the browser’s 60 fps cap (the engine itself is capable of 120 fps) and makes full use of standard browser technologies (WebGL).

Click on picture above to watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.
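
To make this concrete, here is a minimal, hand-written asm.js module; a sketch of my own for illustration, since real asm.js modules are emitted by compilers rather than written by hand:

    // Minimal asm.js module: the "use asm" pragma lets the engine validate
    // the code and compile it ahead-of-time to efficient machine code.
    function MiniModule(stdlib, foreign, heap) {
      "use asm";
      function add(a, b) {
        a = a | 0;          // parameter type annotation: 32-bit integer
        b = b | 0;          // parameter type annotation: 32-bit integer
        return (a + b) | 0; // return type annotation: 32-bit integer
      }
      return { add: add };
    }

    var mod = MiniModule({}, {}, new ArrayBuffer(0x10000));
    console.log(mod.add(2, 3)); // prints 5

The bitwise coercions look odd, but they are exactly what gives the engine the static type information it needs to compile the function down to machine code.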

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • An HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between a traditional, machine-code based OS and our web variation is that our version doesn’t have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).

Enjoying other cloud applications is easy with Amibian.js; here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I’m trying to avoid a bare-metal OS; otherwise I would have written the system in a native programming language like C or Object Pascal. I am not using JavaScript because I lack skill in native languages, I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances, because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs written for Amibian.js can call on them to perform a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it’s designed to run clustered, using as many CPU cores as possible. It’s also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn’t need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (the ODroid XU4, Raspberry PI and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above come in.

As a software developer, many of my customers work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, and ticket systems for trains and buses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, and unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock solid application model suitable for distributed computing. You might have 1 ticket booth, or 10,000 nationwide. There are no systems available that are designed to deal with web-technology on that scale. Yet 😉

Let’s look at a couple of real-life scenarios that I have encountered; I’m confident you will recognize a common need. Here are some roles that Amibian.js can assume to help deliver a solution rapidly. It should also give you some idea of the economic possibilities.

Updated: Please note that we are talking JavaScript here, not native code. There are a lot of native solutions out there, but the whole point is to forget about CPU, chipset and target, and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router, you log in to your router. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update, since it’s connected to a native Apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel, with varying degrees of success.
  • Hotels have more or less the exact same need but on a smaller scale, especially the larger hotels where the lobby has information booths and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on their size they can need anything from a single node to a hundred.
  • Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages and enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5 based media applications and give them the same depth as a native operating-system; something that is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provide a wealth of libraries and features for native application development, Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past, with the machine that defined multimedia: the Commodore Amiga. The relation is more than just visual; Amibian.js uses the same system architecture, because we believe it’s one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all, I’m not selling anything. It’s not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done, Amibian.js will be free for everyone (LGPL). I’m also writing it because it’s something that I need and that I haven’t seen anywhere else. I think you have to write software for yourself, otherwise the quality won’t be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly is that I use a compiler system called Smart Mobile Studio. This saves months, even years, of development time, and I get all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).

Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the Object Pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data driven applications. It also thrives in embedded systems and low-level system services. In short: it’s a lot easier to maintain 50,000 lines of Object Pascal code than 500,000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely using Smart Mobile Studio. This gives me as the core developer of both systems a huge advantage (who knows it better than the designer right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn’t have OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the Object Pascal community – components and libraries written in Delphi or Freepascal, which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it’s been a popular toolkit for ages (C is only 3 years older than Pascal).
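
To illustrate what crafting a VMT in JavaScript means, here is a simplified sketch of my own; it is not the actual code Smart Mobile Studio emits, just the general idea of dispatching virtual methods through an explicit table:

    // Sketch: each class carries an explicit method table (the VMT).
    // Inheritance copies the parent's table; overrides patch single slots.
    function createClass(parent, overrides) {
      var vmt = parent ? Object.assign({}, parent.vmt) : {};
      Object.assign(vmt, overrides); // patch the virtual slots
      return { vmt: vmt, parent: parent };
    }

    var TObject = createClass(null, {
      toString: function (self) { return "TObject"; }
    });

    var TWindow = createClass(TObject, {
      toString: function (self) { return "TWindow"; } // virtual override
    });

    function callVirtual(cls, name, self) {
      return cls.vmt[name](self); // dispatch through the table
    }

    console.log(callVirtual(TWindow, "toString", {})); // "TWindow"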

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be setup and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is yes, you install it. But it’s the same as installing Chrome OS; it’s not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform independent, chipset agnostic system. Something that doesn’t care if you are using ARM, x86, PPC or MIPS as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it’s designed to host more elaborate applications. Normally you would just embed an external website in an IFrame, but Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window’s message-port. This is a special object that is handed to the application as a handshake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
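
As a rough illustration of the model, a hosted application might talk to the desktop along these lines. The message types and fields below are hypothetical; the real messages must conform to the Ragnarok specification:

    // Hypothetical sketch of a hosted application receiving the
    // message-port handshake from the desktop. Message types and
    // fields are illustrative only.
    window.addEventListener("message", function (event) {
      var port = event.ports && event.ports[0];
      if (!port) return; // not the desktop handshake

      // Ask the desktop to read a file on our behalf; the desktop
      // checks the request against our security manifest first.
      port.postMessage(JSON.stringify({
        type: "file-read",            // illustrative message type
        path: "~/documents/readme.txt"
      }));

      port.onmessage = function (msg) {
        var response = JSON.parse(msg.data);
        console.log("desktop replied:", response);
      };
    });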

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But as mentioned above, they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
  • Applications compiled to LDEF bytecode. LDEF is a 68k-inspired assembly language whose output is JIT compiled by the browser (via the asm.js subset) and runs extremely fast. The LDEF virtual machine is a sub-project of Amibian.js

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.

Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part of the project is to create the world’s first cloud-based development platform. A full Visual Studio clone if you like, one that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic, it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic – “no native code allowed”. This is why the entire back-end and service layer targets node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.
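
A node.js service can even report which platform it landed on at runtime; a trivial sketch:

    // The same service file runs unmodified on ARM, x86, MIPS or PPC;
    // node.js exposes the host details should you want to log them.
    var os = require("os");
    console.log("architecture:", process.arch);  // e.g. "arm", "x64"
    console.log("platform:", process.platform);  // e.g. "linux"
    console.log("cpu cores:", os.cpus().length);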

A second reason is: Chrome OS might be free, but it’s only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It’s quite frankly boring and too boxed in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud or on cheap SBCs. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn’t automatically mean bad; it actually means that it has adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.

The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS. Quite simply because the way Amiga OS is organized, and the way you approach computing on the Amiga, is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It’s a system that you could learn to use fully with just a couple of days of exploring – and no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as many of the original ROM Kernel functions as possible. Naturally I can’t port all of them, because they are not all relevant for Amibian.js. Things like device-drivers serve little purpose here, because Amibian.js talks to node.js, node talks to the actual system, and the system in turn handles the hardware devices. But the way you would create windows, visual controls, bind events and create a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have set up a dedicated machine with Amibian.js then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons, or services as they are also called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence – the one that deals with your account, your preferences and adaptations – runs when you log in to the system.

When you log in to your Amibian.js account – no matter if it’s locally on a single PC, on a distributed cluster, or via the browser into your cloud account – several things happen:

  1. The client (web-page if you like) connects to the server using WebSocket (see the sketch after this list)
  2. Login is validated by the server
  3. The client starts loading preference files via the mapped filesystem, and then applies these to the desktop
  4. A startup-sequence script file is loaded from your account and then executed. The shell-script runtime engine is built into the client, as is REXX execution
  5. The startup-script sets up configuration, creates symbolic links (assigns) and mounts external devices (Dropbox, Google Drive, FTP locations and so on)
  6. When finished, the programs in the ~/WbStartup folder are started. These can be both visual and non-visual
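
A minimal sketch of what steps 1 through 3 could look like from the client side; the endpoint, message names and the loadPreferences helper are all hypothetical, since the real protocol is defined by the Ragnarok specification:

    // Hypothetical client-side login flow. All names are illustrative.
    var socket = new WebSocket("wss://127.0.0.1:8090"); // step 1: connect

    socket.onopen = function () {
      socket.send(JSON.stringify({                      // step 2: validate login
        type: "login", username: "jon", password: "****"
      }));
    };

    socket.onmessage = function (event) {
      var msg = JSON.parse(event.data);
      if (msg.type === "login-ok") {
        // step 3: load preferences via the mapped filesystem,
        // then run the startup-sequence (steps 4-6).
        loadPreferences(msg.sessionId); // hypothetical helper
      }
    };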

As you can see, Amibian.js is not a mockup or “fake” desktop. It implements all the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced, where file-data is loaded via special drivers; drivers that act as a bridge between a storage service (a harddisk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.
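
A storage driver could be sketched like this; the interface below is hypothetical, since the actual contract is defined by the Amibian.js driver standard:

    // Hypothetical filesystem driver sketch. Method names are
    // illustrative; real drivers must conform to the driver standard.
    class DropboxDriver {
      constructor(token) {
        this.token = token; // credentials for the backing service
      }
      async readFile(path)        { /* translate path to a Dropbox API call */ }
      async writeFile(path, data) { /* upload data via the Dropbox API */ }
      async listDirectory(path)   { /* enumerate folder contents */ }
    }

    // The desktop would mount the driver under a device name,
    // much like an Amiga "assign":
    // filesystem.mount("DBX:", new DropboxDriver(userToken));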

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (Scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy-images with a game you love, it will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are being ported to JavaScript; you have NES, SNES, N64, PSX I & II, Sega Megadrive and even a NEO GEO port. So playing your favorite console games right in the browser is pretty straightforward!

But the really interesting part is probably QEmu. This allows you to run x86 instances directly in the browser too. You can boot up in Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point) but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side and only the graphics and sound are streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don’t have to be on the same machine – you can place them on separate machines and divide the workload between them.

Above: The official Amibian.js cluster, 4 x ODroid XU4S SBCs in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called “headless” computers, meaning that they don’t have an HDMI port – they are designed to be logged into, and set up, via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI out port, which serves as “the master”.

The architecture is quite simple: we allocate one whole SBC to a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture, the machine that deals with the desktop clients doesn’t have to do all the grunt work. It will accept tasks from the user and hosted applications, and then delegate the tasks between the 4 other machines.
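
node.js ships with a standard cluster module that does exactly this kind of per-core replication. A minimal sketch, not Amibian.js-specific code:

    // Fork one worker per CPU core so a single service saturates the
    // whole SBC; the master respawns any worker that dies.
    const cluster = require("cluster");
    const os = require("os");

    if (cluster.isMaster) {
      os.cpus().forEach(() => cluster.fork());  // one worker per core
      cluster.on("exit", () => cluster.fork()); // respawn dead workers
    } else {
      // each worker runs the actual service here
      require("http").createServer((req, res) => {
        res.end("handled by pid " + process.pid);
      }).listen(8090);
    }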

Note: The number of SBCs is not fixed. Depending on your use, you might not need more than a single SBC in your home setup, or perhaps two. I started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40 core monster, shipping at around $250.

But as mentioned, you don’t have to buy this to use Amibian.js. You can install it on a spare x86 PC, or daisy chain a couple of older PCs on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBCs in the initial design all have a GPU (graphical processing unit) as well as audio capabilities. What they lack is GPIO pins and 3 additional USB ports. So each of the nodes in our cluster can handle graphics at blistering speed – but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40 core cluster I use has more computing power than Northern Europe had in the early 80s; that’s something to think about. And the price tag is under $300 (!). I don’t know about you, but I have always wanted a proper mainframe: a distributed computing platform that you can log in to and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant for emulating an Amiga directly in the browser, a more interesting design is to decouple the emulation from the output. In other words, run the emulation at full speed server-side, and just stream the display and sound back to the Amibian.js client. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi-threading) and fully utilizes the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).
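
A rough sketch of that idea, streaming rendered frames from a server-side emulator over a WebSocket. The emulator object and its captureFrame method are hypothetical placeholders, and the “ws” npm package is assumed:

    // Hypothetical server-side sketch: the emulator renders off-screen
    // and each frame is pushed to the client as a binary message.
    const WebSocket = require("ws");
    const wss = new WebSocket.Server({ port: 9000 });

    wss.on("connection", (socket) => {
      const timer = setInterval(() => {
        const frame = emulator.captureFrame(); // hypothetical emulator API
        socket.send(frame, { binary: true });  // raw pixel data to client
      }, 1000 / 60); // target 60 frames per second

      socket.on("close", () => clearInterval(timer));
    });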

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but this is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it’s wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, people will start using it en masse and produce applications for it. It’s always a bit of work to reach that point and create critical mass.

Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and a normal internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster, and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.

I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super expensive hardware to order, no absurd hosting fees, and finally a system that we all can shape and use in a plethora of systems – from a fully fledged desktop to a super advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!

I can’t wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

Support my work on Patreon, get awesome stuff

September 2, 2018

For well over a decade now I have tried my best to be of service to the Delphi community. I run six Pascal forums on Facebook, I teach Delphi for free in my spare time, and I help people solve problems, find jobs and get inspired.

“to utterly re-write the traditional development toolchain and create
a desktop environment and development studio that is unbound
by chipset, cpu and platform”

I am about to embark on the biggest journey I have ever undertaken, namely to deliver a set of technological platforms that combined will give both users and developers unprecedented advantages.

Support my work by becoming a patron

The challenge with new and awesome technology is that it can be difficult to convey. The full implications of something revolutionary need a little gestation, maturity and overview before the “OMG” factor hits home. But thankfully the Delphi and Smart Pascal community is amongst the most learned, creative and innovative I have ever seen. Not to mention the Amiga retro scene, which has also supported me – a group made up of hardware wizards, FPGA programmers and hackers that eat assembly code for breakfast.

I won’t dazzle you with empty promises or quick fixes. Every part of what I present here is rooted in code I have running in my lab. I hope that the doors Smart Mobile Studio has opened, the work I have done on the RTL and the products I have made have at least earned me your patience; and that you will read this and see if it’s worthy of your support.

Context

When we released Smart Mobile Studio 3.0 we made a live web desktop demo to showcase some of the potential the technology has to offer. What was not mentioned was that this was in fact not a mockup or slap-dash demo intended to impress you with Quake III or the Bassoon music tracker. It has deeper roots, and is a reformation of the Quartex Desktop API that has been an essential part of Smart Mobile Studio since the beginning.

The desktop, codename Amibian.js, is actually a platform that is a part of a larger, loftier goal. One that was outlined to investors as early as 2013. Sadly I was unable to secure funds for it, despite the fact that two companies are using the prototype for kiosk and embedded systems already (city kiosk terminals in Spain running on ODroid XU4 ARM boards, and also an educational platform for schools in New Zealand).

The goal, to cut it short, is quite simply: to utterly re-write the traditional development toolchain and create a desktop environment and development studio that is unbound by chipset, CPU and platform. In other words, to re-implement and build a “visual studio” environment that lives completely in the cloud, one that can be accessed by any modern browser, on any operating system, anywhere in the world.

I’m not talking about Notepad or Ace here, I am talking about a complete IDE with form designer, database designer, cloud endpoints, multi language support and above all – the ability to compile and deploy both virtual and native applications through established build services. All of it JavaScript, all of it running on Node.js, Electron or HTML5.

You won’t be drag & dropping components, you will be dropping entire ecosystems.

Smart Mobile Studio, new tools for a new age

When I started some eight years ago, this would have been impossible. There were no compilers that could take a complex language like Object Pascal or C++ and successfully express it as JavaScript. JavaScript on its own, at least compared to C++ or Delphi, is quite poor. Things we take for granted, like classes, linear inheritance, virtual and abstract methods (which require a VMT) and interfaces, simply do not exist. There have been some advances lately of course, but JavaScript is, and will always be, a prototype based runtime system.

For eight years the Smart Mobile Studio team have worked to create the ecosystem needed to make large-scale application development for JSVM (Javascript virtual machine, the browser, Phonegap, NodeJS and more) a reality. We have forged the compiler, the support code and an RTL spanning thousands of classes from scratch.

It is now possible to write JS based applications that rival native applications both in scope and complexity. This has without a doubt been one of the hardest tasks I have ever been involved in.

With Smart Mobile Studio in place and the foundation stone set – we can finally get to work on the real product. Namely a cloud forge unlike any other.

The Amibian desktop environment

The desktop platform that forms the basis of my work – was nicknamed Amibian due to its visual inspiration from Amiga OS 4.1, a modern but somewhat obscure operating system for PPC computers. But while there are cunning visual similarities, Amibian.js is a very different beast under the hood.

First of all, Amibian.js is written from scratch to be cloud oriented. The Ragnarok message server at the heart of the system is capable of handling hundreds of users simultaneously, each dispatching high volumes of data. It is a server system designed from scratch to be clustered, scalable and distributed.

The Ragnarok message protocol performs brilliantly, here testing IO messages live

You can run it together with the client, forming an OS much like ChromeOS, on something as small as a Tinkerboard ($70 embedded board) or scale it to a 100 node Amazon cluster. If node.js can be installed, Amibian can run. CPU or chipset is quite frankly irrelevant.

This is the foundation that the next generation IDE and compiler toolchain will be built on. A toolchain that doesn’t care if you prefer Linux, Windows, OSX or Android.

If you have a HTML5 compliant browser, you can create full-scale applications with the same level of depth as Delphi, and target 8 operating systems and more than 50 embedded devices.

What does that mean for Delphi users

Like Smart Mobile Studio, Amibian is not meant to compete with Delphi. It is designed to complement and extend Delphi – allowing Delphi developers to reach avenues where native code might be impractical or less cost-effective.

The new compiler is based around the LDEF virtual machine specification that I drafted in the spring of 2018. It is written in Smart Pascal and runs on every system that node.js supports (which as of writing is substantial). LDEF is a bytecode specification designed to make native code generation easy. Unlike .Net or Java, LDEF is a register based virtual machine. It is a cross-section of how ARM, x86 and MC68000 CPUs work in real life. It has stacks, registers, condition flags, data control, program control, absolute and relative addressing; and of course instructions that all CPUs support.
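
To make “register based” concrete, here is a toy dispatch loop in the same spirit. The opcodes and encoding below are invented for this sketch; they are not the LDEF specification:

    // Toy register-machine dispatch loop illustrating the general shape.
    const OP_LOADI = 0, OP_ADD = 1, OP_HALT = 2;

    function run(program) {
      const regs = new Int32Array(16); // 16 general-purpose registers
      let pc = 0;                      // program counter
      for (;;) {
        const [op, a, b, c] = program[pc++];
        switch (op) {
          case OP_LOADI: regs[a] = b; break;                 // Ra := immediate
          case OP_ADD:   regs[a] = regs[b] + regs[c]; break; // Ra := Rb + Rc
          case OP_HALT:  return regs[0];                     // result in R0
        }
      }
    }

    // R1 := 2; R2 := 3; R0 := R1 + R2; halt -> prints 5
    console.log(run([[OP_LOADI, 1, 2], [OP_LOADI, 2, 3], [OP_ADD, 0, 1, 2], [OP_HALT]]));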

The LDEF assembler is implemented completely in Smart Pascal. The picture shows the testbed with a visual coding editor. The assembler is meant to run under node.js server-side but can also be hosted on a website or post compiled into a native executable

When executing this bytecode under JavaScript, the runtime uses the subset of JavaScript called “Asm.js” out of the box. Asm.js is more mature than WebAssembly and less restrictive (modules are not sandboxed from the DOM). So to make it short: the code runs close to native, courtesy of JIT optimization.

LDEF is modular, meaning that the parser, compiler, assembler and codegen (the part that converts bytecode to something else) are separate modules. Writing a WebAssembly codegen, x86 codegen or ARM codegen can be done separately without breaking the existing tooling.
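
In code terms, that modularity boils down to keeping each stage behind a narrow contract. A hedged sketch with illustrative stage objects, not the actual LDEF interfaces:

    // Sketch of a modular pipeline: swapping the target means swapping
    // only the last stage. All stage objects here are illustrative.
    function compile(source, stages) {
      const ast      = stages.parser.parse(source);   // source -> syntax tree
      const ir       = stages.compiler.lower(ast);    // tree -> intermediate form
      const bytecode = stages.assembler.assemble(ir); // intermediate -> bytecode
      return stages.codegen.emit(bytecode);           // bytecode -> target output
    }

    // e.g. compile(src, { parser, compiler, assembler, codegen: wasmCodegen });
    //      compile(src, { parser, compiler, assembler, codegen: x86Codegen });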

Having assembled the code (see picture above), the list command dumps the bytecode to the console in readable fashion. It is then disassembled using the “dasm” command.

The LDEF prototype has been completely written in Smart Pascal, but a port is underway for Delphi and C++ builder. This gives Delphi developers the benefit of using bytecode libraries in their code. If you install Delphi server-side, you can use Amibian as a pure web front end for Delphi (!)

Create applications anywhere, on anything

Since everything is JavaScript you are no longer bound to chipset or CPU. You can set up Amibian on Amazon or Azure, an office server or an affordable, off-the-shelf SBC (single board computer). You can daisy chain 10 older PCs into a cluster and get 5 more years out of the hardware; the compiler is made in JS and doesn’t care if the real CPU is outdated. It cares about bytes and endian-ness, that’s it.

Early implementation of the desktop, here running native 68k (Amiga) code directly. Both x86 and PPC runtimes are now possible – the days of cloud are here

You can be on holiday in Spain armed only with an iPad and a BlueTooth keyboard, and should inspiration strike, you can login and write your application without even installing an app on your iPad. You just need a modern browser to start writing applications.

Patreon Tiers

Depending on your level of support, you get access to different parts of my work. As of writing I have 4 frameworks that are being maintained, and that I want to continue to maintain for those that support me:

  • $5: High five! Support the work as a nice gesture
  • $10: Access to and support for developing my tweening library for VCL
  • $25: License management for VCL and FMX, full source code access to Hexlicense and support for porting Ironwood to Delphi + a new REST based registration server
  • $35: Rage libraries, get full access to the ByteRage database framework and Pixelrage graphics library, and support their evolution. The timeline includes SQL and condition parsing, which will not be covered by the current running tutorial. Want a clean Delphi alternative to SQLite? Well, let me make it for you.
  • $45: LDEF assembler and virtual machine. Get full source code access to the Smart Pascal assembler (runs on node.js) and the Delphi port as soon as it rolls off the assembly line (pun intended). Enjoy proper documentation for the instructions and bytecode format, and get both the native and web assembler applications! As a bonus, this level gives you access to video tutorials and recordings dealing with LDEF, HexLicense, Tweening and everything else.
  • $50: Amibian and Ragnarok: Amibian.js client, server and development toolchain.
    This is the motherlode and you get to enjoy all of it before anyone else.

    • Full access to beta builds, updates, new features – all of it before anyone else!
    • Explore the Ragnarok client / server message API
    • Follow my video tutorials and let me help you dig into Smart Pascal and node.js
    • Ask questions and get a deeper understanding of Smart Mobile Studio, Amibian.js and LDEF.
    • Have a front seat reserved as we unleash the power of Delphi, Smart Pascal and JavaScript on the world.
  • $100: Amibian Embedded Setup: For the true Amibian.js supporters! You get all the perks of the previous tiers, but with an added bonus of pre-made Amibian.js disk images for the ODroid XU4 and the Asus Tinkerboard once LDEF and the IDE have been implemented. These disk images start the Ragnarok server as a daemon (Linux service) during the boot sequence. The system then continues booting into a full-screen webview that renders the Amibian.js desktop. There is no Linux desktop involved.
    This is by far the most cost effective setup for Kiosk and Embedded work with either a touch display or keyboard access.

    As an extra perk this version of Amibian.js contains an optimized version of uae.js (Amiga emulation) and is capable of executing ADF disks and harddisk images directly in their own window.

    With the service layer now fully developed, combined with truly platform independent compiler technology – we have in fact created an interesting alternative to ChromeOS. One with a minimal footprint that is cost effective and easy to expand. A system that you have full control over and can change, rebrand, modify and enjoy!

    Congratulations! You have helped bring Amibian.js and a whole new development paradigm into this world!

If this whets your appetite then head over to my Patreon site and show your support! I start shipping code to those that support me next week, so get onboard and let’s make it happen!

Final words

Patreon is not the same as a Kickstarter or a formal investment; I think this is important to underline. I hope however that you find my work interesting and that you would like to see this realized.

LDEF is not just a fancy bytecode runtime, it is also a framework that other developers can use to make new languages. The whole point of this is to blow the old limitations away and to really push technology to the maximum.

Being able to write system services that work the same on all operating-systems, and then deploy entire ecosystems – this used to be science fiction. Now it’s not.

I want to thank those that have become patrons – it really means so much! If enough support my work I can allocate more time for implementing the tools the community needs and be of greater service to everyone.

Thank you for your time

Jon Lennart Aasenden

Nano PI Fire 3, part two

July 18, 2018

If you missed the first installment of this test, please click here to catch up. In this installment we are just going to dive straight into general use and get a feel for what can and cannot be done.

Solving the power problem

Like mentioned in the previous article, a normal mobile charger (5 volt, 2 amps) is not enough to power the Nano-PI. Since I have misplaced my original PI power-supply (5 volt, 3 amps), I decided to cheat: I plugged the power USB into my PC, which will deliver as much juice as the device needs. I don’t have time to wait for a new PSU to arrive, so this will have to do.

But for the record (and underlined): a proper PSU with at least 2.5 amps is essential for using this board. I suggest you order the official Raspberry PI 3b power-supply; if you can find one with 3 amps, that would be even better.

Web performance

The question on everyone’s mind (or at least mine) is: how does the Nano-PI fire 3 perform when rendering cutting edge, hardcore HTML5? Is this little device a potential candidate for running “The Smart Desktop” (a.k.a Amibian.js for those of you coming from the retro-computing scene)?

Like I suspected earlier, X (the Linux windowing framework) doesn’t have drivers that deliver hardware acceleration at all.

Lubuntu is a sexy desktop, no doubt, but it’s overkill for this device

This is quite easy to test: select a rectangle on the Lubuntu desktop and move the mouse-cursor around while holding down the left mouse button. If the selection lags terribly, that is a clear indicator that no acceleration exists.

And I was right on the money because there is no acceleration what so ever for the Linux distribution. It struggles hopelessly to keep up with the mouse-pointer as you move it around with an active selection; something that would be silky smooth had the GPU been tasked with the job.

But hardware acceleration is not just about the desktop. It’s not some flag you enable that magically affects everything; it is rather several APIs at either the kernel level or the immediate driver level (modules the kernel loads), each affecting different aspects of a system.

So while the desktop “2d blitting” is clearly CPU driven, other aspects of the system can still be accelerated (although that would be weird and rare. But considering how Asus messed up the Tinkerboard, I guess anything goes these days).

Asking Chrome for the hard facts

I fired up Google Chrome (which is the default browser thank god) and entered the magic url:

chrome://gpu

This is a built-in page that provides a detailed report of what Chrome has learned about the current system, right down to the specific GPU features used by OpenGL.

As expected, there was NO acceleration whatsoever. So I was quite surprised that it managed to run Amibian.js at all. Even without hardware acceleration it outperformed the Raspberry PI 3b+ by a factor of 4 (at the very least), and my particle demo ran at a whopping 8 fps (frames per second). The original Raspberry PI could barely manage 2 fps. So the Nano-PI Fire is leagues ahead of the PI in terms of raw CPU power, which is brilliant for headless servers or computational tasks.

FriendlyCore vs Lubuntu? QT for the win

Now here is a funny thing. So far I have used the standard Lubuntu Linux image, and performance has been interesting to say the least. No hardware acceleration, impressive CPU results, but still – what good is an SBC Linux distro without fast graphics? Sure, if you just want a headless file server or host services then you don’t need a beefy GPU. But here is the twist:

Turns out the makers of the board have a second, QT oriented distro called FriendlyCore. And this image has OpenGL ES support and all the acceleration missing from Lubuntu.

I was pretty annoyed with how Asus gave users the run-around with Tinkerboard downloads, but they have thankfully cleaned up their act and listened to their customers. FriendlyElec might want to learn from Asus’ mistakes in this area.

QT has a rich history, but it’s being marginalized by node.js and Delphi these days

Alas, the FriendlyCore Xenial 4.4 ARM64 image turned out to be a pure embedded development image. This is why the board has a debug port (which is probably awesome if you are into QT development). It is aimed at QT developers who want to use the board as a single-application system: they write the code on Windows or Linux, compile it, and everything is transported to the board, with live debugging back to the devtools they use. In other words: not very useful for non C/C++ QT developers.

Android Lollipop

I have only used Android on a pad and the odd Samsung Galaxy phone, so this should be interesting. I downloaded the Lollipop disk image, burned it to the SD card and booted up.

After 20 minutes with a blank screen I gave up.

I realize that some Android distros download packages ad-hoc and install directly from a repository, so it can take some time to get started; but 15-20 minutes with a black screen? The Android logo didn’t even show up – and that should be visible almost immediately, network install or not.

This is really a great shame, because I wanted to test some Delphi Firemonkey applications on it, to see how well it scales with the more demanding GPU tasks. And yes, I did try a different SD card to be sure it wasn’t a disk error. Same result.

Back to Lubuntu

Having spent considerable time trying to find a “wow” factor for this board, I have to surrender to the fact that it’s just not there. This is not a “PI” any more than the Tinkerboard is a PI. And appending “pi” to a product name will never change that.

I can imagine the Nano-PI Fire 3 being an awesome single-application board for QT C/C++ developers though, with a dedicated debug port making it a snap to transport, execute and live-debug directly on the hardware. But for general DIY hacking, for native Android development with Delphi, for node.js development with Smart Mobile Studio – or just kicking back with emulators like Mame, UAE or whatever tickles your fancy – it’s just too rough around the edges. Which is really a shame!

So at the end of the day I re-installed Lubuntu and figure I just have to wait until Friendly-elec get their act together and issue proper drivers for the Mali GPU. So it’s $35 straight out the window — but I can live with that. It was a risk but at that price it’s not going to break the bank.

The positive thing

The Nano-PI Fire 3 is yet another SBC in a long list that falls short of its potential. Like many others they try to use the word “PI” to channel some of the Raspberry PI enthusiasm their way – but the quality of the actual system is not even close.

In fact, using PI in their product name is setting themselves up for a fall – because customers will quickly discover that this product is not a PI, which can cause some subconscious aversion and resentment.

The Nano rendered Amibian.js running some very demanding demos 4 times as fast as the PI 3b, one can only speculate what the board could do with proper drivers for the GPU.

The only clear positive the Fire-3 has to offer is abundantly more CPU power. It is without a doubt twice as fast (if not 3 times as fast) as the Raspberry PI 3b. The fact that it can render highly demanding and complex HTML5 demos 4 times faster than the Raspberry PI 3b without hardware acceleration is impressive. This is a $35 board after all – the same price as the PI.

But without proper drivers for the Mali, it’s a useless toy. Powerful and with great potential, but utterly useless for multimedia and everything that relies on fast 2D and 3D graphics. For UAE (Amiga emulation) you can pretty much forget it. Even if you could compile the latest UAE4Arm with SDL as its primary display framework, it wouldn’t work, because SDL depends on the graphics drivers. So it’s back to square one.

But the CPU packs a punch, that is beyond question.

Final verdict

Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4

There are a lot of stable and excellent options out there – take your time

I was planning to test UAE next, but as I have outlined above: without drivers that properly expose and delegate the power of the Mali, it would be a complete disaster. I’m not even sure it would build.

As such I will just leave this board as is. If it matures at some point that would be great, but my advice to people looking for a great SBC experience — get the new Raspberry PI 3b+ and enjoy learning and exploring there.

And if you are into Amibian.js or making high quality HTML5 kiosk / node.js based systems, then fork out the extra $10 and buy an ODroid XU4. If you pay $55 you can pick up the Asus Tinkerboard which is blistering fast and great value for money, despite its turbulent introduction.

Note: You cannot go wrong with the ODroid XU4. It’s affordable, stable and fast. So for beginners it’s either the Raspberry PI 3b+ or the ODroid. These are the most mature in terms of software, drivers and stability.

Nano-pi Fire 3: The curse of Mali

July 5, 2018

Being able to buy SBCs (single board computers) for as little as $35 has revolutionized computing as we know it. Since the release of the Raspberry PI 1b back in 2012, single board computers have gone from being something electrical engineers work with, to something everyone can learn to use. Today you don’t need a background in electrical engineering to build a sophisticated embedded system; you just need a positive spirit, willingness to learn and a suitable SBC starter-kit.

Single board computers

If you are interested in SBC's for whatever reason, I am sure you have picked up a few pointers about what works and what doesn't. In these times of open forums and 24/7 internet, the response-time from customers is close to instantaneous, and both positive and negative feedback should never be taken lightly. It's certainly harder to hide a poor product in 2018, so engineers thinking they can make a quick buck selling sloppy tech should think twice.

I have no idea how many boards have been released since 2016, but 20+ new boards feels like a reasonable estimate. Which under normal circumstances would be awesome, because competition can be healthy. It can stimulate manufacturers to deliver better quality, higher performance and to use eco-friendly resources.

But it can also backfire and result in terrible quality and an unhealthy focus on profit.

The Mali GPU

The Mali graphics chipset is a name you see often in connection with SBC's, and if you read up on it, it sounds fantastic. All that power, an open architecture, partner support – surely this design should rock, right? If only that were true. My experience with this chipset, which spans a variety of boards, is anything but fantastic. It's actually a common factor in a growing list of boards that are unstable and unreliable.

I don't have an axe to grind here; I have tried to remain optimistic and positive about every board that passes my desk. But Mali has become synonymous with awful performance and unreliable operation.

Out of the 14-odd boards I have tested since 2016, the 8 boards that I count as useless all had the Mali chipset. This is quite remarkable considering Mali has an open driver architecture.

Open is not always best

If you have been into IOT for a few years you may recall the avalanche of criticism that hit the Raspberry PI foundation for their choice of shipping with the Broadcom SoC. Broadcom has been a closed system with proprietary drivers written exclusively by the vendor, which made a lot of open-source advocates furious at the time.

You know what? Going with the Broadcom chipset is the best bloody move the PI foundation ever made; I don't think I have ever owned an SBC or embedded platform as stable as the PI, and the graphics performance you get for $35 is simply outstanding. Had they listened to their critics and used Mali on the Raspberry PI 2b, it would have been a disaster. The IOT revolution might never even have occurred.

The whole point of the Mali open driver architecture, is that developers should have easy access to documentation and examples – so they can quickly implement drivers and release their product. I don’t know what has gone wrong here, but either developers are so lazy that they just copy and paste code without properly testing it – or there are fundamental errors in the hardware itself.

To date the only Mali-based board that works well, out of all the boards I have bought and tested, is the ODroid XU4. Which leads me to conclude that something has gone terribly wrong with the art of making drivers. This really should not be an issue in 2018, but the number of bankrupt Mali boards tells another story.

Nano-PI Fire 3

When reading the specs of the Nano-PI Fire 3 I was impressed with just how much firepower they managed to squeeze into such a tiny form-factor. Naturally I was sceptical because of the Mali, which so far only has the ODroid going for it. But considering the $35 price it was worth the risk. Worst case I could recycle it as a headless server or something.

And the board is impressive! Let there be no doubt about the potential of this little thing, because from an engineering point of view it's mind-blowing how much technology $35 buys you in 2018.


I don't want to sound like a grumpy old coder, but when you have been around as many SBC's as I have, you tend to hold back on the enthusiasm. I got all worked up over the Asus Tinkerboard for example (read part 1 and part 2 here), and despite the absolutely knock-out specs, the Mali drivers and shabby kernel work crippled an otherwise spectacular board. I still find it inconceivable how Asus, a well-respected global technology partner, could have allowed a product to ship with drivers not even worthy of public domain. And while they have since updated and improved things, it's still nowhere near what the board could do with the right drivers.

The experience of the Nano-PI so far has been the same as many of the other boards; especially those made and sold straight from smaller, Asian manufacturers:

  • Finding the right disk-image to download is unnecessarily cumbersome
  • Display drivers lack hardware acceleration
  • Poor help forums
  • “Wiki” style documentation
  • A large Linux distro that maxes out the system

More juice

The first thing you are going to notice with the Nano-PI is how important the power supply is. The Nano ships with only one USB socket (one!), so a USB hub is the first thing you need. When you add a mouse and keyboard to that equation, you have already maxed out a normal 5V 2A mobile power supply.

I noticed this after having problems booting properly with a mouse and keyboard plugged in. I first thought it was the SD card, but no matter what card I tried, it still wouldn't boot. It was only when I unplugged the mouse and keyboard that I could log in. Or should we say, couldn't log in, because you don't have a keyboard attached (sigh).

Now in most cases a Raspberry PI would run fine on 5V 2A, at least for ordinary desktop work; but the Nano will be in serious trouble if you don't give it more juice. So your first purchase should be a proper 5 volt, 3 amp PSU. This is also recommended for the original Raspberry PI, but in my experience you can do quite a lot before you max out a PI.

Bluetooth

A redeeming factor for the lack of USB ports and power scaling is that the board has Bluetooth built in. So once you have paired and connected a BT keyboard, things get easier to work with. Personally I like my keyboard and mouse wired; I hate having to change batteries or being disconnected at random (which always happens when you least need it). So the lack of USB ports and power delegation is a negative for me, but objectively I can live with it as a trade-off for more CPU power.

Lack of accelerated graphics

It's not hard to check whether X uses the GPU or not. Select a large region of the desktop (holding the left mouse button down, obviously) and watch in terror as it sluggishly tries to catch up with the cursor, repainting every cached co-ordinate. Had the GPU been used properly you wouldn't even see the repaint of the selection rectangle; it would be smooth and instantaneous.
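You can run a similar sanity check from the browser side with a few lines of JavaScript. This is just a quick probe to paste into the devtools console (not a definitive test); it asks WebGL which renderer the browser actually got, and a software rasterizer like "llvmpipe" means there is no GPU acceleration:

var canvas = document.createElement('canvas');
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (!gl) {
  console.log('No WebGL context - GPU acceleration missing or blacklisted');
} else {
  // The debug extension exposes the real (unmasked) renderer string
  var info = gl.getExtension('WEBGL_debug_renderer_info');
  var renderer = info ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL)
                      : gl.getParameter(gl.RENDERER);
  console.log('Renderer: ' + renderer); // e.g. "Mali-T764" vs "llvmpipe"
}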

SD-card reader

I'm sorry, but the SD-card reader on this puppy is the slowest I have ever used. I have never tested a device that boots this slowly, and even something simple like starting Chrome takes ages.

I tested with a cheap SD card but also with a more expensive class 10 card. I'm really trying to find something cool to write about, but it's hard when the boot times are worse than the Raspberry PI 1b back in 2012.

1 gigabyte of ram

One thing that I absolutely hate about some of these cheap boards is how they imagine Ubuntu to be a stamp of approval. The Raspberry PI foundation nailed it by creating a slim, optimized and blistering fast Debian distro. This is important, because users don't buy alternative boards just to throw the extra power away on Ubuntu; they buy these boards to get more CPU and GPU power (read: better value for money) for the same price.

Lubuntu is hopelessly obese for the hardware, as is the case with other cheap SBC’s as well. Something like Pixel is much more interesting. You have a slim, efficient and optimized foundation to build on (or strip down). Ubuntu is quite frankly overkill and eats up all the extra power the board supposedly delivers.

When it comes to RAM, 1 gigabyte is a bit too small for desktop use. The reason I say this is because it ships with Ubuntu – why ship with Ubuntu unless the desktop is the focus? Which again raises the question: why create a desktop Linux device with 1 gigabyte of memory?

The nano-pi would rock with a slim distro, and hopefully someone will bake a more suitable disk-image for it.

Verdict so far

I still have a lot to test so giving you a final verdict right now would be unfair.

But I must be honest and say that I'm not that happy with this board. It's not that the hardware is particularly awful (although the Mali drivers render it almost pointless); it's just that it serves no purpose.

In order to turn this SBC into a reasonable device you have to buy parts that bring the price up to what you would pay for an ODroid XU4. And to be honest, I would much rather have an ODroid XU4 than four Nano-PI boards. You get plenty of USB ports, good power scaling (the ODroid will start just fine on a normal charger), Bluetooth and pretty much everything you need.

For those rare projects where a single USB port is enough – although I cannot for the life of me think of one right now – then sure, it may be cost-effective in quantity. But for homebrew servers, gaming rigs and/or your first SBC experience, I think you will be happier with an original Raspberry PI 3b+, an ODroid XU4 or even the Tinkerboard.

Modus operandi

Having said all that, there is also something to be said about modus operandi. Different boards are designed for different systems. It may very well be that this board is designed to run Android as its primary system. So while they provide a Linux image, that may in fact only be a "bonus" distro. We shall soon see, as I will test Android next.

Next up: how does it fare with the multi-threaded UAE4Arm? Stay tuned for more!

The Tinkerboard Strikes Back

August 20, 2017 Leave a comment

For those that follow my blog you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it – in fact I bought two of the boards, because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never emerged for the user. It was released prematurely, and I think most people that bought it agree on this.


The initial release was less than bad, it was horrible

Since my initial review those months ago good things have happened. Asus seem to have listened to the “poonami” of negative feedback and adapted their website accordingly. Unlike the first time I visited when you literally had to dig into recursive menus (which was less than intuitive in this case) just to download the software – the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get; downloading it is reduced to a "one-liner" for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever-popular Python and Scratch.

I am a bit disappointed that they don't provide FreePascal units. A lot of developers use Object Pascal on these boards after all, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good Pascal programmer – but it would be cool if Asus remembered that there are more languages in the world than C and Python.

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the textbook example of how not to release a product (the whole release was odd; Asus is a huge, multi-national corporation, yet their release had "three-man basement band" written all over it).

So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and the like; basically anything with a touch-screen that does something.


The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix famously runs its user-facing stack on node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end that couples with an HTML5 front-end. This is a ready-to-use environment that you can deploy your own applications through. Things like storage, a nice-looking UI, user logon and credentials and much, much more are all implemented for you. You don't have to use it of course; you can write your own system from scratch if you like. We created "Amibian" to demonstrate just how powerful Smart Pascal can be in the right hands.

With this in mind, my main concern when testing SBC's (single board computers) is obviously web performance. By default JavaScript is a single-core, event-driven runtime system; you can spawn threads of course, but it's done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system friendly, a gentle giant if you like, which has turned out to be a good thing – because the way JS schedules execution makes it ideal for clustering!

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost whole ecosystems in themselves. With JIT compilers and LLVM optimization, it's a whole new ballgame.
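To show what I mean by clustering, here is a minimal node.js sketch using the standard cluster module – one worker per CPU core, all sharing a single server port (the port number is just an example):

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // Spawn one worker per core, and replace any worker that dies
  os.cpus().forEach(function () { cluster.fork(); });
  cluster.on('exit', function () { cluster.fork(); });
} else {
  // Every worker shares the same port; incoming connections are balanced
  http.createServer(function (req, res) {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(8080);
}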

Making a scale

To give you a better context for where the Tinkerboard sits on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and seeing how each of them performs on different boards. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and, last but not least, the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning that they were compiled as part of the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b


Bassoon ran well, it's not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (it contains hand-optimized assembler, so it's not exactly a fair fight) – but when it comes to modern HTML5, the PI doesn't stand a chance. You could perhaps use a Raspberry PI 3b for simple applications which are not graphics and CPU intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don't want a Raspberry when doing high-quality IOT, unless it's headless code under node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box – in fact you can target 40 embedded systems without any change to your code. But for large, high-quality web front-ends, the PI just won't cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick ass ARM experience anyways. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4


The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here. And it's only 10€ more than the Raspberry PI, which is a toy at best. But the ODroid XU4 delivers a good Linux desktop, and it's well worth the extra 10€ when compared to the PI.

Personally I don't understand why people keep buying PI's when there are so many better options on the market now – at least not if web technology is involved. For a small server or emulator, sure, but not for HTML5 and browsers. The PI just can't handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that ODroid XU4 can run Quake.js! The PI can’t even start that because it’s so demanding. It is one of the largest and most resource hungry asm.js projects out there – but ODroid XU4 did a fantastic job.

Now it's not a silky smooth experience; I would guess something along the lines of 17-20 fps. But you know what? That's pretty good for a $45 board.

I have owned far worse x86 PC’s in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that Asus had done something to rectify the situation though, because they really should have done a better job before releasing it. It's now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have waited with the release. I was not alone in butchering the board either; it has been a source of frustration for those that bought it. 75€ is not much, but no one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order I have a feeling the Jessie image will be even faster – Ubuntu has always been a high-quality distribution, but it’s also one of the most demanding. One might even say it’s become bloated. But it does deliver a near Microsoft Windows like experience which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus has cleaned up its act and implemented the drivers properly, and you can feel it the moment the desktop comes into view. With the PI you are always fighting lagging performance: when you start a program the whole system freezes for a while, when you quit a program the system freezes – hell, when you move the mouse around the system bloody freezes! That is certainly not the case with the Tinkerboard. The Tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC, to be honest.

  • Skid-Row: 10-15 frames per second
  • Quake: Full screen 32bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any embedded coding, regardless of whether you are using Smart Mobile Studio or some other devkit – this is the board to get (!)

Like already mentioned, it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, and you can explore software far more complex than both the PI and ODroid combined. With the Tinkerboard you can finally deliver a state-of-the-art product built with off-the-shelf web components. It's in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinker I felt cheated. It was so frustrating, because the specs were so good and the terrible performance just came down to sloppy work and Asus releasing it prematurely for cash (let's face it, they tapped into the lucrative market established by the PI foundation). Looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. Like I mentioned, Ubuntu is not the racecar of distributions; Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompiled every piece with maximum LLVM optimization and ran as few services as possible – the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the Tinkerboard! It is by far the coolest board in my collection now. In fact it's so good that I'm donating one to my daughter. She is presently using an iMac, which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and YouTube, buy a nice touch-screen display and wall-mount it in her room.

Well done Asus!

LDef and bytecodes

July 14, 2017 Leave a comment

LDef, short for Language Definition format, is a standard I have been formulating for a couple of years. I have taken my experience of writing various compilers, parsers and RTL's, and combined it all into a standard.

LDef is a way for anyone to create their own programming language. Just like popular libraries and packages deal with the low-level stuff – like Gr32, which is an excellent graphics library – LDef deals with the hard stuff and leaves you with the pleasant job of defining what the language should look like.

The idea is to make a language construction kit, if you like, where the underlying engine is flexible enough to express the languages we know and love today – and also powerful enough to express new ideas. For example: let's say you want to create an awesome new game system (just as an example; it applies to any system that can be automated). You have the means and skill to create the actual engine – but how are you going to market it? You will be up against monoliths like Unity and simple "click and play" engines like ClickTeam Fusion, Game Maker and the like.

Well, the only way to make good games is hard work. There are no two ways about it. You can fake your way only so far – so at the end of the day you want to give your users something solid.

In our example of publishing a game-engine, I think that you would stand a much better chance of attracting users if you hooked that engine up to a language. A language that is easy to use, easy to learn and with commands that are both specific and non-specific to your engine.

There are flavours of Basic that have produced knock-out games for decades, like BlitzBasic. That language alone has produced hundreds of titles for PC, Xbox and even Nintendo. So it's insanely fast and no pushover.

And here is the cool part about LDef: it makes it easy for you to design your own languages. You can use one of the pre-defined languages, like Object Pascal or Visual Basic, if that is what you like – but ultimately the fun begins when you start to experiment with new ideas and language features. And it's fun when you get to that point, because all the nitty-gritty is handled. You get to focus on the surface stuff, like syntax and high-level functions. So you can shave off quite a bit of development time and make coding fun again!

The paradox of faster bytecodes

Bytecodes used to be too slow for anything substantial. On 16-bit machines bytecodes were used in maybe one language (that I know of), and that was the 'E' compiler. The E language was maybe 30 years ahead of its time and is probably the only language I can think of that fits cloud programming like hand in glove. But it was also an excellent system automation (scripting) language, and it really turned some heads back in the late 80s and early 90s. REXX was only recently added to OS X, some 28 years after the Amiga line of computers introduced it to the general public.


Bytecode dump of a program compiled with the node.js version of the compiler

In modern times bytecodes have resurfaced through Java and the .NET framework which for some reason caused a stir in the whole development community. I honestly never bought into the hype, but I am old enough to remember the whole story – so I’m probably not the Microsoft demographic anyways. Java hyped their virtual machine opcodes to the point of exhaustion. People really are stupid. Man did they do a number on CEO’s and heads of R&D around the world.

Anyways, the end of the story was that Intel and AMD went along with it and made some optimizations to help bytecodes run faster. The stack handling was optimized with Java in mind, because let's face it – Java's stack traffic is the proverbial assault on the hardware. And the cache was expanded on command from the emper.. eh, Microsoft. Also (if I remember correctly) the "jump to pointer" and various branch instructions were made to execute faster. I remember reading about this in Dr. Dobb's Journal and Microsoft Developer Magazine, granted a few years ago. What was interesting was the symbiotic relationship that exists between Intel and Microsoft; I really didn't know just how closely knit these guys were.

Either way, bytecodes in 2017 are capable of a lot more than they ever were on 16-bit and early 32-bit systems. A CPU like the Intel i5 or i7 will chew through bytecodes like a warm knife through butter. It depends on how you orchestrate the opcodes and how much work you delegate to the individual instructions.
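To make that concrete, here is a toy register-based dispatch loop in JavaScript. The opcodes are invented for illustration – they are not LDef's actual opcodes – but the tight fetch-decode-execute switch is exactly the kind of loop a modern JIT chews through:

var LOADC = 0, ADD = 1, HALT = 2;  // invented opcodes, for illustration only
var R = new Int32Array(32);        // a register file, 32 slots
var prog = [LOADC, 0, 2,  LOADC, 1, 3,  ADD, 0, 1,  HALT];  // r0=2; r1=3; r0+=r1
var pc = 0, running = true;

while (running) {
  switch (prog[pc++]) {
    case LOADC: { var r = prog[pc++]; R[r] = prog[pc++]; break; }  // constant to register
    case ADD:   { var a = prog[pc++], b = prog[pc++]; R[a] += R[b]; break; }
    case HALT:  running = false; break;
  }
}
console.log(R[0]); // 5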

Modeled instructions

Bytecodes are cool, but they have to be modeled right or it's all going to end up as a bloated, slow and limited system. You don't want to be too low-level, otherwise what is the point of bytecodes? Bytecodes should be part of a bigger picture, one that could someday be modeled using FPGA's for instance.

The LDef format is very flexible. Each instruction is ultimately a single 32-bit longword (4 bytes), where each byte holds key information about the command, what data follows it and how that data should be read.

The byte organization is:

  • 0 – Actual opcode
  • 1 – Instruction layout

Depending on the instruction layout, the next two bytes can hold different values. The instruction layout is a simple value that defines how the data for the instruction is passed; a small decoding sketch follows the list below.

  • Constant to register
  • Variable to register
  • Register to register
  • Register to variable
  • Register to stack
  • Stack to register
  • Variable to variable
  • Constant to variable
  • Stack to variable
  • Program counter (PC) to register
  • Register to Program counter
  • ED (exception data) to register
  • Register to exception-data
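
To make the layout concrete, here is a small decoding sketch in JavaScript (node.js). The field names are my own invention – only the byte layout itself (opcode in byte 0, layout id in byte 1, layout-dependent data in the remaining bytes) comes from the description above:

function decodeInstruction(buffer, offset) {
  return {
    opcode: buffer.readUInt8(offset),      // byte 0: the actual opcode
    layout: buffer.readUInt8(offset + 1),  // byte 1: how the data is passed
    dataA:  buffer.readUInt8(offset + 2),  // bytes 2-3: meaning depends on the
    dataB:  buffer.readUInt8(offset + 3)   // layout, e.g. register or constant id
  };
}

// A hypothetical "register to register" instruction packed as 4 bytes:
var ins = decodeInstruction(Buffer.from([0x10, 0x02, 0x00, 0x01]), 0);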

As you can probably work out from the information here, this list hints at some architectural features. Variables are first-class citizens in LDef; they are allocated, managed and released using instructions. Constants can either be handled automatically and referenced by id (a resource chunk is linked to the class binary), or compiled "in place" directly into the assembly as part of the instruction. For example:

load R[0], "this is a test"

This line of code takes the constant "this is a test" and moves it into register #0. You can choose to have the text data stored as a proper resource that is appended to the compiled bytecode (all classes and modules have a resource chunk), or compiled "as is" with the data read directly. The first option is faster, and something you can adjust with the compiler's optimization options. The second option is easier to work with when you debug, since you can see the data directly as part of the debug memory dump.

And last but not least there are registers – 32 of them (so the low-level coders out there should have few limitations with regards to register mapping). All operations (like divide, multiply etc.) operate on registers only. So to multiply two variables, they first have to be moved into registers; the multiplication is executed there, and then you can move the result back to a variable afterwards.


LDef assembly code. Simple but extremely effective

The reason registers are used in my runtime system is that you would not be able to model an FPGA with high-level concepts like "variables", should someone ever try to implement this in hardware. Registers, however, are very easy to model, and they reflect how actual processors work: you move things from memory into a CPU register, perform an action, and then move the result back into memory.

This is where Java made a terrible mistake. Java moves all data onto the stack and then calls the operation. This simplifies the execution of instructions, since there are never any registers to keep track of, but it murders stack space and renders Java useless on mobile devices. The reason Google threw out classical Java (i.e. Java as bytecodes) is due to this fact (and more). After the first Android devices came out, they quickly switched to a native compiler – because Java was too slow, too power-hungry, and required too much memory (especially stack space) to function properly. Battery life was close to useless, and the only way to save Java was to go native. Which is laughable, because the entire point of Java was mobility: "compile once, run everywhere" – yeah well, that didn't turn out too well, did it 😀

Dot net improved on this by adding a "load resource" type instruction, where each method loads its constant data by number into pre-defined slots (the variables you have declared). Then you can execute operations in the typical "A + B to C" style (actually, most of that is implicit, since the compiler already knows A, B and C). This is much more stack-friendly, and places the performance penalty on the common language runtime (CLR).

Sadly Microsoft’s platform, like everything Microsoft does, requires a pretty large infrastructure. It’s not simple, elegant and fast – it’s more monolithic, massive and resource hungry. You don’t see .net being the first thing ported to a new platform. You typically see GCC followed by Freepascal.

LDef takes the bytecode architecture one step further. On assembly level you reference data using identifiers just like .net, and each instruction is naturally executed by the runtime-engine – but data handling is kept within the virtual realm. You are expected to use the registers as temporary holding slots for your information. And no operations are ever done directly on a variable.

The benefit of this is:

  • Better payload balancing
  • Easier to JIT since the architecture is closer to real assembly
  • Retains important aspects of how real hardware works (with FPGA in mind)

So there are good reasons behind the standard's design – all of them practical.

A C-like intermediate language

With the assembler so clearly defined, you would expect assembly to be the way you work. In essence that is what you do; but since OOP is built into the system and there are structures you are expected to populate – structures that would be tedious to write in raw, unbridled assembler – I have opted for a C++ inspired intermediate language.


The LDEF assembler kitchen sink

You would half expect me to implement Pascal, but truth be told Pascal parsing is more complex than C parsing, and C allows you to recycle parsers more easily – so dealing with sub-structures and nested regions means less maintenance and is easier to write code for.

So there is no specific reason why I picked C++ as an intermediate language. I would prefer Pascal, but I also think that would cause a lot of confusion, since Object Pascal will be the prime citizen of LDef languages. My other language, N++, also used curly brackets, so I'm honestly not strict about what syntax people prefer.

Intermediate language features supported are:

  • Class declarations
  • Struct declarations
  • Parameter to register mapping
  • Before method code (enter)
  • After method code (leave)
  • Alloc section for class fields
  • Alloc section for method variables

The before and after code for methods is very handy. It lets you define code that executes before and after the actual method body. On a higher level, when designing a new language, this is where you would implement custom allocation, parameter testing and so on.

So if you call this method:

function testcode() {
  enter {
    writeln("this is called before the method entry");
  }
  leave {
    writeln("this is called after the method exits");
  }
  writeln("this is the method body");
}

Calling it results in the following output:

this is called before the method entry
this is the method body
this is called after the method exits


When you get to work designing your own language, you will eventually find plenty of uses for hooks like these.

Truly portable

Now I have no aspirations of competing with Oracle, Microsoft or anyone in between. Like most geeks I do the things I find interesting, and enjoy working within a field of computing that is stimulating and personally rewarding.

Programming languages is an area where things haven't really changed that much since the golden 80s. Sure, we have gotten a ton of fancy new software, and the way people use languages has changed – but at the end of the day, the languages themselves have remained much the same.

JavaScript is probably the only language that came out of the blue and took the world by storm, but that is due to the central role the browser holds for the internet. I sincerely doubt JavaScript would even have made a dent in the market otherwise.

LDef is the type of toolkit that can change all this. It’s not just another language, and it’s not just another bytecode engine. A lot of thought has gone into its architecture, not just notions of “how can we do this or that”, but big ideas about the future of computing and how IOT will sculpt the market within 5-8 years. And the changes will be permanent and irrevocable.

Being able to define new languages will be of utmost importance in the decade ahead. We don't even know the landscape yet, but we can extrapolate some ideas based on where technology is going. All of it in broad strokes of course, but still – there are some fundamental facts about computers that are timeless and haven't aged a day. It's like mathematics: the Pythagorean theorem may be 2500 years old, but it's just as valid today as it was back then. Principles never die.

I took the example of a game engine at the start of this article. That might have been a poor choice for some, but hopefully the general reader got the message: the nature of control requires articulation. Regardless of whether you are coding an invoice system or a game engine, factors like time, portability and ease of use will be just as valid.

There is also automation to keep your eye on. While most of it is just media hype at this point, there will be some form of AI automation. The media always exaggerates, so I think we can safely disregard a walking, self-aware Terminator-type robot replacing you at work; in my view you can disregard as much as 80% of what the media talks about (regardless of topic). But some industries will see vast improvements from automation. The oil and gas sector is the most obvious. At the moment safety is as good as humans can make it – which means it is flawed, and something goes wrong every day around the globe. But smart pumping stations and clever pressure measurement and handling will make a huge difference for the people who work with oil. And safer oil pipelines means lives saved and better environmental control.

The question is: how do we describe programs 20 years from now? Are our current tools up to the reality of IOT and billions of connected devices? Do we even have a language that runs equally well on a 1000-instance server cluster as it does as a stand-alone program on your desktop? When you start to look into parallel computing and multi-cluster data processing farms, languages like C# and C++ make little sense. Node.js is close, very close, but dealing with all the callbacks and odd limitations of JavaScript is tedious (which is why we created Smart Pascal to begin with).

The future needs new things. And for that to happen we first need tools to create them. Which is where my passion is.

Node, native and beyond

When people create compilers and programming languages they often do so for a reason. It could be that their own tools are lacking (which was my initial motivation), or that they have thought of a better way to achieve something; the reasons can be many. In Microsoft's case it was revenge and spite, after they were unsuccessful in stealing Java away from Sun Microsystems (Oracle now owns Java).


LDef binaries are fairly straight forward. The less fluff the better

Point is, you implement your idea using the language you know, on the platform you normally use. For me that is Object Pascal on Windows. I'm writing it in Object Pascal because, while the native compiler and runtime are written in Delphi, they are made to compile under FreePascal for Linux and OS X.

But the primary work is done in Smart Pascal and compiled to JavaScript for node.js. So the native part is actually a back-port from Smart. And there is a good reason I’m doing it this way.

First of all, I wanted a runtime and compiler system that requires very little to run. Node.js has grown fat with features over the past couple of years – but out of the box node.js is fast, portable and available almost anywhere these days. You can write some damn fast and scalable cloud servers with node (and by fast I mean FAST, as in handling thousands of online gamers all playing complex first-person worlds), and you can also write stable, rock-solid system services.

Node is turning into a jack of all trades, capable of scaling and clustering way beyond what most native stacks can do. Netflix actually re-wrote their service stack using node back in 2014; the old stack was not able to handle the payload, and every time they made a small change it took 45 minutes to compile and get a binary to test. So yeah, node.js makes so much more sense when you start looking at big data!

So I wanted to write LDef in a way that makes it portable and easy to implement, regardless of platform, language and features. Out of the box, JavaScript is pretty naked stuff, and the most advanced high-level feature LDef uses is buffers to deal with memory. Everything else is forced to be simple and straightforward: no huge architecture or global system services, just a small, fast runtime and your binaries. And that's all you need to run your compiled applications.
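To give you a flavour of what "buffers for memory" means in practice, here is a hypothetical sketch of a register file and heap backed by plain node.js buffers. The names and sizes are illustrative, not LDef's actual internals:

var REG_COUNT = 32;                           // LDef defines 32 registers
var registers = Buffer.alloc(REG_COUNT * 4);  // one 32-bit slot per register
var heap = Buffer.alloc(64 * 1024);           // a flat byte heap; size is arbitrary here

function setReg(index, value) {
  registers.writeUInt32LE(value >>> 0, index * 4);
}
function getReg(index) {
  return registers.readUInt32LE(index * 4);
}

setReg(0, 0xCAFE);
console.log(getReg(0).toString(16)); // "cafe"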

Ultimately, LDef will be written in LDef itself and compile itself, needing only a small executable stub to be ported to a new platform. Most of Mono, the C# runtime for Linux, is written in C# itself – which makes it super easy to move Mono between distros and operating systems. You can't do that with Visual Studio, at least not until Microsoft wants you to. Neither would you expect that from Apple's XCode. Just saying.

The only way to achieve the same portability that Mono, FreePascal and C/C++ have to offer is naturally to design the system that way from the beginning. Keep it simple, avoid (operating-system) globalization at all cost, and never ever use platform-bound APIs except in the runtime. Be POSIX, but for everything!

Current state of standard and licensing

The standard is currently being documented, and a lot of work has already been done in this department. But it's a huge project to document, since it covers not only LDef as a high-level toolkit, but stretches from the compiler, through the source code it is designed to compile, down to the very binary output. The standard documentation is close to a book at this stage, but that's the way it has to be to ensure every part is understood correctly.

But the question most people have is often “how are you licensing this?”.

Well, I really want LDef to be a free standard. However, to protect it against hijacking and abuse, a license must be obtained by financial entities (as in companies) using the LDef toolkit and standard in commercial products.

I think the way Epic handles Unreal Engine licensing is a great example of how things should be done. They never charge the little guy or the indie developer – until they are successful enough to afford it. Once sales hit a defined sum, you are expected to pay a small percentage in royalties. Which is only fair, since the Unreal Engine is central to the software to begin with.

So LDef is open source, free to use for all types of projects (with an obligation to pay a 3% royalty for commercial products that exceed $4999 in revenue). Emphasis is on open-source development. As long as the financial obligations of companies and developers using LDef to create successful products are respected, only creativity sets the limit.

If you use LDef to create a successful product where you make 50,000 NOK (roughly USD 5000), you are legally bound to pay 3% of your product revenue monthly for the duration of the product. Which is extremely little: 3% of $5000 is $150, a lot less than you would pay for a Delphi license (the latter costing upwards of USD 3000).


Freepascal online, awesome work!

June 20, 2017 Leave a comment

FreePascal is, with the exception of GCC – the GNU C/C++ compiler toolchain – probably the most versatile and adaptable compiler in the world. Unlike most compilers outside the C camp, FreePascal is a complete toolchain, and it has more than a few similarities to C/C++ in that it's largely self-reliant.


Freepascal scales from the old mc68000 family to the latest Intel i7, impressive!

FPC is easy to port over to new (or old) platforms, and as far as languages go – object pascal has a level of productivity that I have yet to find in other languages.

Now I have no interest in language wars or fanboy stereotyping; all languages have aspects that are unique and beneficial. So this is not an attempt to promote A over B.

Targeting both old and new from a single codebase

While mainstream and work related topics never brush into this – there is actually a healthy group of people still using older platforms. Naturally this is well within the “hobby” range – where passion and curiosity are the prime motivators rather than cash. But still, it is interesting to see how little of modern software is capable of scaling back to processors 10, 15 or 20 years out of date.

Alb42 is the nickname of a very industrious individual who has taken it upon himself to port FreePascal back to his roots – which just happen to be the Amiga line of computers.


Some of the best arcade games ever made are “vintage” by nature

But he has also done a lot of work for modern systems. A new Amiga was actually released this year (although most people haven't noticed it) – and finding development software for a brand new platform is not easy. You have a variant of C/C++, naturally – but the honest truth is that C/C++ is not the language most people use to create desktop productivity software. That is left to Object Pascal, C# and variations of Java; Delphi alone is responsible for a huge portfolio of popular software on Windows, only recently surpassed by monoliths like C#.

So getting FreePascal running on Amiga OS 4 (the modern operating system shipping with the new Amiga x5000 model) is very important to the market share of the platform.

Do it online!

Alb42 has set up an online compiler testbed where you can type in your FreePascal snippets and compile them to executable code on the spot. You can choose between MorphOS, classic 68k, AROS (x86) and Amiga OS 4 (PPC). It's a very simple setup, yet it's possible to do ad-hoc compilation and play around with the system 🙂


While humble, the online kitchen sink Alb42 has set up demonstrates how flexible and powerful Object Pascal really is

To give it a go, head over to: https://home.alb42.de/fpamiga/

And as always, visit his blog for both VMware images with cross compilation and binaries for your favorite retro system.

But perhaps even more important, your new x5000 (!)

Don't forget to join us at Amiga Dispatch on Facebook.

Cheers!

A day with the Tinkerboard, part 2

June 10, 2017 Leave a comment

Please read the first article here before you continue.

Having gone through a wealth of alternative boards – alternative to the Raspberry PI 3b, that is – I have to admit that I am really, really disappointed by the results. Not just with the Tinkerboard, but also with boards like the ODroid XU4 and the whole fruit-basket of banana, orange and pineapple PI clones.


What a waste

And please keep in mind that I have not been out to discredit anyone. I have been genuinely interested in these alternatives for many reasons, not just emulation or multimedia. I have no problem paying that extra 15£ for a system that delivers twice the power of the PI, in fact I even paid close to 150£ for the UP board. Which is also the only board that so far has delivered on its promise. Microsoft Windows and the horrors of eMMC storage considered.

Part of what I work with these days includes node.js and full-screen browser views, often touch-enabled, for POS (point of sale) type machines. This is an exciting field, because for the first time in history we can create rich, responsive and rock-solid solutions using off-the-shelf web technology.

So while emulation of older systems is interesting for hobby purposes, the performance of vital technologies like the browser – and consequently the performance of JavaScript and the underlying JIT compilation, is naturally paramount for what I do.

So I was really looking forward to the extra power these boards advertise: the CPU, the number of cores, the extra memory and, last but not least, the benefits of a fast GPU. The latter directly impacts the effects we can use in our user interfaces. And while effects may seem superficial, they are important for touch feedback and the customer's experience.


We have all come across touch devices that give little or no visual feedback. We all know what it's like when you type something and the UI just hangs while talking to a server. This is why web tech is so important today. In many ways HTML5 and WebSocket are re-defining what we think of as software.

So while I bring few pie charts and numbers to the table, I do bring honesty and perspective. It's not all fun and games.

Was it really fair?

It has been voiced by a couple of readers that perhaps my way of testing things is unfair. Like I mentioned in my previous article, the PI has a healthy head start and a rich community that not only consumes and demands, but also contributes substantially to the value of the product. A well-established community around a product is worth its weight in gold. The alternative boards have yet to establish that, and when it comes to the Tinkerboard, it's pretty much just getting started.

Secondly, as far as emulation goes, we have to remember that those who make these products rarely have emulation in mind (and probably not the Amiga, which by any standard is esoteric compared to the Nintendo and Sega consoles). If we look at what people use gadgets like the Raspberry PI for, I think products like Kodi, Apache and various DIY programming experiments out-rank all the retro systems combined. And it's clear that the people behind the Tinkerboard are more media-oriented than anything else; it ships with hardware support for 4K video playback after all.

Having said that, I do feel I gave both the ODroid and the Tinkerboard a fair trial. I did not approach these boards as a developer, but rather as a consumer. I have focused on the experience of familiar things – like using a browser, how well JavaScript runs, and whether the UI lags under stress; things that any customer will face after purchase.

UAE4Arm vs FS-UAE

As far as UAE (Unix Amiga Emulator) goes, there is one thing that might be unfair. Amibian is based around something called UAE4Arm, which is a highly optimized version of the Amiga emulator. I would even go so far as to say it's THE most optimized UAE version on any platform – Windows and OSX included.

The Amiga was a lot more than just a games machine, it was way ahead of both Windows and OS X for a decade

Comparing that with FS-UAE is perhaps unfair. FS-UAE is based on the older, non-optimized codebase. It's known for stable emulation and good JIT performance – sadly the JIT engine only works on x86, and is thus disabled on ARM systems. So to be honest, it can't hold a candle to UAE4Arm. The latter uses lookup tables for everything, fast switch cases instead of long sequences of "if then else" code (easier for the compiler to optimize), and it benefits from hardcore LLVM level-3 optimization. So I agree, comparing FS-UAE to UAE4Arm is like comparing a tiger with a kitten. It's not even in the same ballpark.

Optimization on this level is almost a lost art these days. People are used to languages like C# and Java – which are hopelessly bloated (for those that have seen what raw assembler and hand optimized pascal can deliver). On the Amiga we learned how to shave off a clock-cycle here and there – and the end result really made a huge difference.

The same type of low-level, every-cycle-counts optimization is used throughout UAE4Arm. FS-UAE is safe, slow, and can hardly deliver A500 speed on a Raspberry PI. Yet UAE4Arm spits out 3.2 times the power of an Amiga 4000 with a 68040 CPU. That was the flagship power machine back in the day, and this little $40 card delivers 3 times its power under emulation.

It's like comparing a muffled 1978 Lada with a pissed-off 2017 Lamborghini.

Fair is fair

Having tested just about every distro I could find for the Tinker – even back-tracking to older releases in the hope that the drivers there would be better suited for the board (perhaps the new drivers were released in a hurry, or a mistake was made) – the last straw really was Android. My hope was that Android would have better drivers, since it sees a lot of development and has giants like Google and Samsung behind it. So the last glimmer of light was that Asus had focused on Android rather than Linux.


Will Android save the day and give the Tinker its due credit?

I downloaded the latest image, burnt it and booted – thinking that this should finally put things right. Well it didn’t. Quite the opposite.

The first boot took almost an hour. Yes, an hour. I suspect this is because it did a network restore – downloading packages and requirements from the Internet. But that is still no excuse. I have tried Android on the PI, and it was up and running in less than 10 minutes – and that includes having to set some initial user information.

When the Android desktop finally came into view, it was stripped down with no repository package manager. In other words you would have to manually install packages. But that would have been OK had it not been for the utterly sluggish performance.

Android on the PI is no race car, but at least it doesn't freeze and lock up every five seconds. Clicking on a link in the browser froze the whole system, and you were asked if you wanted to terminate the process again and again while Android figured out the meaning of life or whatever. Every single time.

I'm sorry, but this is the worst experience I have ever had with a board. The ODroid XU4 at least tries to deliver a good experience: I had no problems finding Linux editions that ran well on it (especially gaming systems, Kodi and some homebrew distros) – and the Chrome build for the ODroid gave far better results than both the PI and the Tinkerboard. It was not the power-house it had been hyped up to be, but at least it delivered a normal desktop experience. (Even the PI locks up when facing demanding tasks, but the Tinker has a quad-core CPU, twice the RAM and a supposedly superior GPU; with the capacity to run four hardware threads in parallel and schedule tasks across them, the Tinkerboard should remain both stable and interactive under pressure.)

I am sorry, but I cannot recommend the Tinkerboard at all. The ODroid gives you a slight edge over the PI, and if someone baked a more optimized Linux distro for it (it ships with a variant of Ubuntu), the ODroid XU4 would kick serious ass. But the Tinkerboard is a complete disaster.

Do not waste your money on the Tinkerboard. It may have the hardware, but the software to utilize and express that power is simply not there! It has been the most pointless purchase I have made this year.

A common waste

The common denominator between these alternatives is the notion that more power equals more stuff – which ultimately results in them biting off more than they can chew. A distro like Pixel is lightweight and optimized, and the PI foundation has spent a considerable amount of time making sure the drivers run as fast as possible.


Saying it doesn't make it so, except in Weird Science

What these alternative hardware suppliers don't seem to get is this: what is the point of having a CPU that is twice as fast as the PI, when you waste it on a more demanding version of Linux? Both the ODroid and the Tinkerboard ship with Ubuntu clones. And while Ubuntu is wonderful to work with on a high-end x86 machine – 32 gigabytes of RAM and an i7 CPU growling away under your desk – it is without a doubt the worst possible OS you could pick for a tiny ARM board.

To put things into perspective: the Tinkerboard has the "theoretical" CPU power of a Samsung Galaxy S4 mobile phone (2013). Yet in their infinite wisdom they decided Ubuntu is what is really going to show off this fart of a device.

It makes absolutely no sense from a consumer’s perspective. As a user you want that extra power for running programs. You want to use the extra power on your stuff. Yet these guys seem to think that wasting all the extra power on infrastructure is going to score points.

So if anyone from the Tinkerboard or ODroid camp reads this, please remember the old saying: "If you cannot innovate, emulate". Make a fast, stable and small Jessie distro like Pixel. Focus on optimization. Compile whatever you can with maximum optimization. Shave off every clock cycle you can. Hand-carve the drivers in assembler if that's what it takes.

Because the sad result is that, right now, the Raspberry PI is outperforming you with lesser hardware. There is no point in buying a Tinkerboard, because it actually delivers a worse experience. It has bitten off more than it can chew, and it's trying to be something it's not.

A day with the Tinkerboard, part 1

June 7, 2017 1 comment

Anyone into IOT mini-computers, be it as a hobby or professionally, has no doubt heard about the Tinkerboard. The Tinkerboard is an alternative to the Raspberry PI (aren't they all) produced by hardware giant Asus. Everything from its form-factor (size) to its color-coordinated GPIO pins is more or less the same as the PI. What separates the two is that the Tinker gives you double the CPU power, double the RAM and pretty much double of everything. If we compare it with the Raspberry PI 3b, that is.

But you don't get double for free. The Tinkerboard retails on Amazon for USD 60, which means something in the ballpark of 55£ for those that live in the UK. For the rest of us that live in far-away places like Norway, prices and taxes may vary.


Before I continue I want to write a few words about pricing. The word “expensive” is one I keep hearing when it comes to the Tinkerboard, but that is extremely unfair considering what you get for your money.

The Raspberry PI 3b is an awesome product and packs a lot of goodies into a credit-card sized computer. The same is true for the Tinkerboard, but they have packed even more stuff into the same space! The difference in price is just 15£ – but those 15£ give you the power to emulate demanding platforms like the Nintendo Wii, Sega Dreamcast and PlayStation (if you are into retro gaming). Platforms the Raspberry PI 3b doesn't stand a chance of emulating (at least not smoothly).

For serious applications all that extra CPU power can be the difference between success and failure. In many ways the Tinker feels a bit like working on a low-end laptop. If you are building a movie server for your living-room, the Tinkerboard will be able to convert and stream movies much more smoothly than the PI. And if you are looking for a device to power your next kiosk system or POS device, again the extra memory and CPU power is going to make all the difference. With just 15£ between the two models, I cannot imagine why anyone would pick the PI if gaming or movie streaming is on the menu.

Oh and it has built-in wi-fi and bluetooth support! Here are the specs as listed on Amazon:

  • High performance quad-core ARM SoC at 1.8GHz with 2GB of RAM – the Tinkerboard features the Rockchip RK3288 SoC with a Mali-T764 GPU and 2GB of dual-channel DDR3 memory
  • Non-shared GBit LAN, shielded Wi-Fi with upgradable antenna support
  • Highly compatible PCB & topology – the Tinkerboard offers extensive compatibility with SBC accessories & chassis
  • HD audio & HD & UHD video support – the Tinkerboard supports 192/24bit HD audio along with accelerated HD & UHD (4K) video playback (* requires use of the Rockchip video player in TinkerOS)
  • DIY-friendly design – the Tinkerboard features multiple DIY-friendly touches, including a color-coded GPIO header, silkscreen PCB and color-coded pull tabs

That sounds pretty nice, doesn’t it 🙂

One does not simply take on the mighty PI

Being an alternative to the Raspberry PI is not as easy as it sounds. You would imagine that it all boils down to the specs, right? Beef up the CPU, RAM and GPU and everyone will throw away their PI’s and buy your stuff? If only that were true. In that case the Tinkerboard would be awesome, since it outperforms the PI on every level.

But it’s not that simple. There is more to a product than just raw power.

One of the cool things about the Raspberry PI is the community; the knowledge base of the product, if you like. In order to build a community and be a success, everything has to click. Just look at the third-party eco-system that exists around the PI: all those companies selling add-ons and widgets, sensors and remote controls. The PI itself is quite dull. It’s all that extra stuff that makes it so fun to play with. Add excellent customer service, a forum where people help each other out, and well-written documentation, and the value of the product suddenly exceeds whatever you paid for it to begin with.

And the Tinkerboard is not alone in trying to get a piece of the action. It competes with boards like the ODroid XU4, the x86 based UP boards, Latte Panda, The Pine boards and a whole fruit-basket of banana, orange and whatever PI clones. All of them trying to get a bite of the phenomenon that is the Raspberry PI.

So Asus has their work cut out for them.

First impressions

Having unpacked the board and plugged in all the cables (which is exactly as dull as it sounds), the first thing you do is look for software. Which naturally should be a one-click operation on the vendor’s website.

Well, the first thing that annoyed me was exactly that: the Asus website. The Tinkerboard is not something that belongs on a sterile, streamlined corporate site. It belongs on a website that is easy to navigate, with plenty of the information hobbyists and “tinkerers” need to get started. The Asus website simply lacks the warmth and friendliness of the PI foundation’s website. I even had to google around a couple of times just to find the disk images. These should be readily available on the site, on the first page even, to make sure customers can get up and running quickly.

tinkerweb

It’s a little bit clangy and a little bit jammy

The presentation of the board was OK, but you really shouldn’t have to use an external search engine to locate essential data for something you just bought. Unlike the PI foundation, Asus seems to have put little effort into what customers get once they have bought the board. As for a community, I don’t even know if there is one.

When you finally manage to find the bloody download page, these are your options (there are 17 downloads in total, counting older OS images, config files and schematics):

  • TinkerOS 1.8
  • TinkerOS 1.6 Beta (my initial download)
  • Android, also beta

And don’t expect a polished Linux distro like Pixel either. You get a clean distro with some typical stuff pre-loaded (Python, Perl for some reason, and various tidbits), but nothing close to the quality of Pixel.

Note: why they would list older beta releases in their downloads is beyond me. It only adds to the confusion of an already sterile list of items, and customers who just got their boards will be eager to try them out and hence make a mistake like I did. Beta and unstable releases should be in a separate category, with the stable and up-to-date images listed first.

The second thing that really irritated me was something as simple as adjusting the keyboard layout and setting the locale. These are simply not present in the preferences at all. Once again you have to google around until you find the terminal commands to set these things manually (on a Debian-based image that typically means the likes of sudo dpkg-reconfigure locales and sudo dpkg-reconfigure keyboard-configuration), which utterly defeats the purpose of booting into a desktop environment to begin with. This might seem trivial to someone who knows Linux well, but for a novice it can take hours. Things this fundamental should be available in the preferences, or at the very least as a script, like raspi-config on the PI.

But OK. If we look away from these superficial things and focus on the actual board, things are not half bad.

It’s the insides that counts

The first thing you are going to notice is how much faster the system is compared to the PI. When you start a demanding application on the PI, like LibreOffice or Lazarus, the whole system can freeze up for a few seconds while the CPU goes through the roof. I was pleased to find that this is not the case here (or at least not as disruptive). You can start a program and the system remains responsive while things load.

What made me furious though was the quality of the graphics drivers. Performance-wise the Tinkerboard is twice (actually a bit more) as fast as the PI on all fronts – and the GPU is supposed to deliver a vastly superior graphics experience. Yet when I ran Amibian.js (the Smart Pascal JavaScript desktop system) in Chrome, the CSS hardware effects were painfully slow. I even questioned whether the GPU was being used at all (note: which I later confirmed was the case, so much for downloading a beta image by mistake). I was amazed at the CPU power of the Tinkerboard, because despite having to do everything purely with the CPU (moving pixels in memory, allocating device-independent bitmaps, rasterizing layers and complex compositions in X) it was more than usable.

But it sure as hell was not the superior graphics experience I was hoping for.

I also noticed that when moving a normal desktop window (an X window, that is) rapidly around the screen, it would lag behind and re-trace your steps until it caught up with the mouse pointer. This was very odd, since most composition engines would have eliminated those intermediate movements – the GPU should be doing all the work. Again I blame the drivers, and pondered what could possibly make a GPU perform so badly.

gpuwrong

I’m surprised it booted at all, nothing is working

So while the system is notably faster than the PI, I get the sense that the drivers and tooling are not really polished. The hardware no doubt has the juice, but without drivers carefully written for the board, most of the extra juice you pay for is wasted.

In the end I opened up Chrome and typed “chrome://gpu” to see what was going on. And not surprisingly, I was correct: it wasn’t using the GPU at all, and I had wasted hours on something that should have been clearly marked “Lacks hardware acceleration” in bold, italic, underlined big red letters. Like the PI foundation did when they shipped the Raspberry PI 2 without GPU drivers. But they promptly delivered, and it was OK because you were informed up front.

 

Back to scratch

After a trip back to google I downloaded the right (or at least working) image, namely the TinkerOS 1.8 stable. It booted up, resized the partition to fill the whole SD card automatically (and other boring bits).

The first thing I did was start Chrome and check “chrome://gpu” again for statistics. And thankfully, most of the items were now working and active.

gpubetter

That’s better, although I ponder why GPU DIBs are not active

Emulation, the name of the game

While I have serious tests to perform, there is no doubt what I was most eager to play with: UAE (the Unix Amiga Emulator). Like most retro-heads I have a ton of Raspberry PI’s and ODroid’s around the house, with the sole purpose of emulating older game and computer systems. And I think everyone knows that my favorite computer in the whole wide world is the Commodore Amiga.

On the PI, Gunnar’s native Amibian distro gives you awesome performance. If you overclock the RAM, GPU and CPU using safe numbers, the PI 3b spits out roughly 3.2 times the power of an MC68040-based Amiga 4000. In other words, 3 times the power of a high-end Amiga from the early 90s. That is quite an achievement for a $40 board. You can only imagine my expectations for a board running more than twice as fast as the PI (and costing those 15£ more).

18986661_10154521981875906_472132585_o

Boots fine but you can forget about using it. The PI is 100 times faster than this

One snag though: UAE4Arm, the highly optimized version of the Amiga emulator, doesn’t exist for the Tinkerboard. In theory it should compile without problems, since the latest version uses SDL exclusively; there is no odd, deprecated UI toolkit like the old UAE4Arm used.

The only thing available was fs-uae. And while that is a very popular emulator on Windows and Mac, it’s sadly not optimized as much as UAE4Arm is. The latter uses lookup tables to the extreme, all if-then-else statements have been replaced by fast switches, and it’s just optimized beyond anything else running on ARM.

The result on the PI is phenomenal, but sadly fs-uae was all I could play with. I will try to compile UAE4Arm on the Tinkerboard later, but right now I’m too busy.

Not optimized at all

It is hard to pinpoint exactly where the problem lies, but I suspect the older codebase of fs-uae, combined with poorly written drivers, is what resulted in the absolutely sluggish behavior.

Don’t get me wrong: if all you want to do is play some A500 or A1200 games, the Tinker is more than capable. But if you want to run the same high-performance desktop that Amibian gives you on the PI, you can pretty much forget it.

I set the system to emulate an A4000, activated the JIT engine and set the emulated display to match my desktop. This should run like hell – yet it was slow as a snail. You could literally see the mouse-pointer slowly trying to keep up as I moved it across the desktop.

One interesting thing though: when I ran fs-uae on the first image, the beta image where everything was crap, it actually ran exceptionally well. But here, on the so-called stable and “up to date” version, it was useless for anything but clean floppy gaming.

There are many factors involved in this. The drivers sit at the bottom, just above the kernel, and expose the hardware through a common API. On top of this you have things like OpenGL, and on top of that again you have SDL, which simplifies typical gaming and graphics tasks. This may sound like a long road for code to travel, but it’s actually very fast – if the drivers expose the power of the GPU, that is.
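To make that concrete, here is a minimal probe in Pascal (a sketch assuming the common FPC sdl2 binding unit; the program name and output strings are mine). It asks SDL for a hardware-accelerated renderer, which is exactly the point in the chain where crippled GPU drivers force a silent fallback to software rendering:

program SdlProbe;

uses
  SDL2; // assumption: the FPC/Lazarus sdl2 bindings

var
  Win:  PSDL_Window;
  Ren:  PSDL_Renderer;
  Info: TSDL_RendererInfo;
begin
  if SDL_Init(SDL_INIT_VIDEO) <> 0 then
    Halt(1);

  Win := SDL_CreateWindow('probe', 0, 0, 320, 240, SDL_WINDOW_SHOWN);

  // Ask explicitly for hardware acceleration; with broken drivers this
  // call fails, or SDL quietly hands back a software renderer instead
  Ren := SDL_CreateRenderer(Win, -1, SDL_RENDERER_ACCELERATED);

  if (Ren <> nil) and (SDL_GetRendererInfo(Ren, @Info) = 0) then
    WriteLn('Renderer in use: ', Info.name); // expect "opengl" on a healthy board

  SDL_Quit;
end.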

I can only conclude at this point that the Tinkerboard has the firepower. I have seen the results of tests performed by others (and tested a retro-gaming image), and you feel the difference almost immediately when you boot the device. But the graphics drivers seem crippled somehow. It’s easy to point the finger at Asus and say they have done a terrible job writing the drivers here, but it could also be a bottleneck elsewhere in the system.

This is something Raspberry PI has got so right. Every part of the system has been optimized for the hardware, which makes the PI feel smooth even under a lot of stress. It delegates all the hard graphical stuff to the GPU – and the GPU just deals with it.

Sadly this is not the case with the Tinkerboard so far. And it really is such a shame because the specs speak for themselves.

Android

Next time we will be giving Android a test, and hopefully that will come with drivers that are more up to date than what we have seen so far.

Smart Pascal: stop sharing beta code

May 20, 2017 Leave a comment

I was a bit sad to hear that our RTL, despite the testing team being a very closed group of people, had made its way into the hands of others. People who then blog or comment that it’s unstable or that too many changes have been added.

When the phrase “early beta” is used, and it is clearly stated that “our first sprint is to test the low-level functionality, like pointers, memory allocation and manipulation, streams, compression, codecs and type conversion”, then why on earth would someone start at the other end (the visual, high-level part), which is due for changes weeks into the future?

Since this is doing the rounds I figured it is only fair that I clear up a few things:

An async run-time library that should deliver identical behavior on 9 operating systems and a plethora of browsers is not something you slap together over a weekend. Many of the solutions in SMS are the result of months of hard work: research and testing on PC, Mac, various mobile devices and six different embedded boards.

I don’t know how many hours I have put into this project these past 6 years, but it’s more than I care to admit.

Obviously an early beta is not going to give you 100% compatibility. Especially not since the whole point of the first sprint was for our group to write, test and make sure the low-level methods are rock solid before we move on.

The second sprint is due soon; here we will look at databases, data binding, node.js and writing system-level services (via the PM2 process manager).

The last sprint will focus on the visual, high-level aspects of the RTL: things like custom controls, layout, non-visual components, event objects, messaging and user interface code.

I’m really gutted that we didn’t even make it through the first sprint before the code was shared. I have no idea who did this, and I frankly don’t care. But from now on I will only play ball with people I trust and have a friendship with.

People doing reviews and publishing opinions on an upgrade that is presently a third of the way towards the final release candidate – I just feel that is unfair.

Polyfills are not shims, and they are not dangerous

It was voiced that the new RTL loads in a couple of files that were considered dangerous by Google. Actually, that couldn’t be further from the truth.

But stop and think about this: why would I link to external JavaScript files for things like mutation-events and observers? If these things are available in browsers (which they have been for a long time) why would I need external files?

The answer is that these files are polyfills. And there is a huge difference between what is known as a shim, and a polyfill.

A polyfill is a small JavaScript library that checks if the browser supports the features you want. If it doesn’t, the polyfill registers its own code to compensate for the missing function or feature. In other words, a polyfill is a full implementation (!)

So even if Google, Mozilla or Microsoft removed support for node-observers or mutation-events tomorrow, your application wouldn’t be affected by it. The polyfill libraries are just that: full library implementations with no dependencies.
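To make the distinction concrete, here is a minimal feature-detection sketch in Smart Pascal (the function is my own illustration, not part of the RTL). A polyfill starts with a test like this and only registers its own implementation when the test fails:

function HasMutationObserver: boolean;
begin
  // True when the browser ships MutationObserver natively.
  // A polyfill leaves the native implementation untouched in that case,
  // and only installs its fallback code when this returns false.
  asm
    @result = (typeof window.MutationObserver !== 'undefined');
  end;
end;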

In the early beta I shipped both the observer and the mutation-events libraries as referenced files. This has already been changed: unless you include SmartCL.Observer.pas or SmartCL.MutationEvents.pas, these will no longer be linked automatically. That was never the plan either.

Had one of you (I have come across 3 different places with comments and info on the RTL around the web) bothered to just ask me, I would have included you in the beta process and explained this.

The executables have become so large!

This is another statement I stumbled over: “The new RTL is huge and it generates large JavaScript binaries!”.

First of all, you were using an early beta, which means things will be in flux while each level of the RTL is being tested, probed and verified. The Pascal uses-section in some units references code they no longer use; cleaning that up is due for sprint 3. But again, in our first sprint the focus was on memory, binary data, type conversion, streams, compression and various low-level stuff – not high-level code!

As for size, that is actually wrong: our namespacing and unit fragmentation is there for a reason. By dividing files into 3 namespaces, and fragmenting large units with many classes into more files, our linker is able to trim away unused code more efficiently, as well as optimize your code beyond anything people have seen so far.
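As a sketch of what that fragmentation means in practice (the unit name below is illustrative; the SmartCL units are the ones discussed earlier):

unit MyApp.Main;

interface

uses
  System.Types,     // low-level namespace
  SmartCL.System;   // visual namespace: list only the units you actually use

implementation

// Because SmartCL.Observer and SmartCL.MutationEvents are absent from the
// uses clause, the linker never emits their code - nor their polyfills.

end.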

poptions

Rule of thumb: when you have finished writing your application, always switch on the optimizations you see above for the final build. So yes, you are expected to use packing, obfuscation etc. before deployment – not just to protect your code, but also to make it load and run faster.

Since you will either use your applications on a mobile device, on an embedded device, online as a website (or on the desktop via node-webkit compilation), you can further speed up loading by activating gzip compression on the server. You will be surprised just how fast and small your applications are when you tune them correctly.

Practical example

The Smart Embedded and Cloud desktop consists of roughly 40+ unit files. Since this desktop can emulate older systems, like the 68k Amiga, Atari, Super Nintendo and much more (more about those later), I have made linking these optional (read: you have to include the units in order to link the code in, UAE.Core.pas being the primary file).

smartdesk

Without the 68k emulation layer (and various other emulators, which are just added for fun and inspiration) and the Aros ROM files (the BIOS file for the emulator), we are looking at a code size in the ballpark of 4.7 megabytes. Which is nothing compared to high-end native solutions.

If you include the 68k emulation layer, the end result is a 10 megabyte index.html file. Again this is small potatoes if you work with embedded systems or kiosk booths.

But since the new RTL design was created to be heavily optimized, far better than the older RTL ever was, the end result after compression is 467 kilobytes (!). When you compress the version that includes the 68k emulation layer, it optimizes down to 1.5 megabytes of code.

  • No 68k emulation layer = 4.7 megabyte uncompressed
  • No 68k emulation, compressed = 467 kilobyte (!)
  • 68k emulation added = 10.2 megabyte uncompressed
  • 68k emulation included, compressed = 1.5 megabyte

obfuscation

From 4.7 megabyte to 467 kilobyte

This level of compression and optimization was planned. I have re-arranged the whole RTL in order to create the best possible circumstances for the compression and obfuscation modules – and I believe the results speak for themselves.

Is the new RTL larger? Indeed it is, with plenty of new classes, controls and libraries. The more features you want to enjoy, the more infrastructure must be in place. That is just how it is, regardless of language.

But you should also know that the new and longer inheritance chain has little or no impact on execution speed. This is what the compiler options “de-virtualization” and “smart linking” are all about. In most cases the code for TW3OwnedObject and TW3OwnedErrorObject is linked into TW3Component. Here is the inheritance chain as of writing:

  • TW3OwnedObject
    • TW3OwnedErrorObject
      • TW3Component
        • TW3TagObj
          • TW3TagContainer
            • TW3MovableControl
              • TW3CustomControl

Why did I change this? There are many reasons. First of all, I wanted to make the RTL more compatible with Delphi, where names like TComponent and TCustomControl are known and understood by most. So when facing TW3Component or TW3CustomControl, most Delphi developers will intuitively know what these are all about. TW3Component is also the base class for non-visual, drag & drop components.

  TW3OwnedErrorObject = class(TW3OwnedObject)
  private
    FLastError: string;
    FOptions:   TW3ErrorObjectOptions;
  protected
    property    ErrorOptions: TW3ErrorObjectOptions read FOptions;
    function    GetLastError: string;
    procedure   SetLastErrorF(const ErrorText: string;
                const Values: Array of const);
    procedure   SetLastError(const ErrorText: string); virtual;
    procedure   ClearLastError;
  public
    property    LastError: string read FLastError;
    constructor Create(const AOwner: TObject); override;
    destructor  Destroy; override;
  end;

There are also other benefits, like uniform error handling. While this might seem useless in an exception-based language, there are places where an exception is not something you want to throw – like in a system-level service running under node.js.

In such cases you should catch whatever exception occurs and call SetLastError() with the description. The caller can then check if something went wrong and take measures.
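Here is a minimal sketch of that pattern (TMyService and its method are hypothetical names; only the inherited TW3OwnedErrorObject members are from the RTL):

type
  TMyService = class(TW3OwnedErrorObject)
  public
    procedure WriteLogEntry(const Text: string);
  end;

procedure TMyService.WriteLogEntry(const Text: string);
begin
  ClearLastError();
  try
    // ... perform the actual I/O work here ...
  except
    // Record the failure instead of letting the exception
    // take down a long-running service
    on e: Exception do
      SetLastError(e.Message);
  end;
end;

The caller then inspects LastError rather than wrapping every call in try/except blocks: call WriteLogEntry(), and if LastError is non-empty, take whatever measures make sense (retry, log to a fallback, notify an operator).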

Embedded, a different beast

One of the things Smart Pascal does exceptionally well is embedded platforms. Regardless of what you have been hired to do, be it a kiosk booth in a museum, a touch-based vending machine, a parking payment terminal or just some awesome IOT project, Smart Pascal makes it child’s play. And the reason it’s so easy to write powerful web applications is precisely the broad scope of our RTL.

Smart Pascal is not a tool for creating websites (well, you can, but that’s not what it was designed for). It is a tool to write high-quality, complex and feature-rich applications.

The distinction between a “web application” and a “website” may seem subtle at first, but it all boils down to depth. The Smart RTL now contains a huge number of classes and third-party libraries that simplify writing modern applications with depth.

Other frameworks that (for some reason) are very popular, like jQuery, jQuery UI and ExtJS, lack this depth. They look great, but that’s it. I recently caught up with a project that had taken the developers a whole year to create in pure JS. It took me 3 weeks to finish what they had spent over 12 months implementing. And that has to do with depth and proper inheritance, not that absurd prototype cloning JavaScript coders use.

To sum up: if you want to enjoy the same features and power in the browser as you do with Delphi or Lazarus on your desktop, then you can’t expect Smart to produce binaries smaller than its native counterparts.

But more importantly: on embedded systems like the Raspberry PI, the UP x86 board or the new and sexy Tinkerboard, do you really think it’s going to matter if your application is 4, 8 or 10 megabytes? By now you should have realized that our compressor and obfuscator bake that down to 467 kilobytes or 1.5 megabytes – both peanuts compared to similar systems written in Java, C# or Delphi.

Ask and ye shall receive

I don’t really care who shared my RTL, but I am sad that it happened. I’m not a difficult person when it comes to these things – all you have to do is ask.

Also remember that the RTL is a copyrighted product. It belongs to a registered financial entity (read: a company) and is not subject to distribution.

FMX4Linux is coming, and we can’t wait!

May 3, 2017 Leave a comment

When Embarcadero announced Linux support for their Tokyo release of Delphi, my soul literally left my body for a moment. Could it really be true? After all these years, had Embarcadero done what many said would be impossible?

I must admit that people saying something is impossible has lost its sting for me. Over the past 3 years I have done one of these impossible things on a weekly basis, yet people are just as shocked every single time. But this time, I was the one in a state of excitement.

As an outspoken (an understatement perhaps) and active blogger, my skin has grown thick over the years. I also get to see a lot of cool tech long before it’s commercially available and mainstream, so it takes more to sway and dazzle me. And you grow a healthy instinct for separating bullshit from true technical achievements too.

Tokyo

Delphi Tokyo is probably one of the finest Delphi editions to date

But yes, I admit it Рthis time Embarcadero surprised me in every positive way imaginable. If you follow my blog you know that I call it as I see it and hold little back. But this was a purely positive experience.

Now I know what you are going to say: everyone knew about this, right? Yeah, me too. But my mind has been elsewhere lately with work and projects, so I didn’t catch the release buzz from the closed forums (well, “closed” is a matter of perspective; I have friends from Russia to the United States, and from China to Sudan) – and for the first time since the Borland days, Embarcadero got the drop on me.

I’m the kind of guy that runs on passion. Delphi, Smart Pascal, and even object pascal as a general language, is not just work for me – it’s something I love to use. I relax and enjoy myself when coding. So you can imagine my reaction when my boss sent me a message saying “Download Tokyo and give me a report”. It was close to midnight, but I was out of that bed faster than bacon on toast, ran into my home office wearing nothing but boxers and a Commodore t-shirt – and threw myself into Embarcadero Developer Network’s download section.

From hero to zero in 2 seconds

I think it was around 07:00 the next morning, one hour before I was due for work, that the magic phrase “command line and system services [daemons] only” hit home. And yes, I “kinda” knew that before – but maybe, just maybe, Embarcadero had thrown in visual applications in the 11th hour. I even had a friendly bet with Jim McKeeth that an FMX UI solution would appear less than 24 hours after release (more about that later).

linuxproj

Now that is a beautiful thing

Either way, it was quite the anti-climax after all that work installing Delphi from scratch in VMWare, waiting, hoping and praying. First of all because I remember watching an in-depth technical review of Firemonkey by David Intersimone a few years back; the one where he describes the Firemonkey architecture in detail, especially how the abstraction layer between the visual control framework and the actual OS made it possible for FMX to quickly adapt to new environments. Firemonkey is a complex and highly adaptable framework, but its biggest strength is, paradoxically enough, its simplicity. A simplicity achieved by abstracting the UI from the rendering functionality – at least as much as possible; you still have to deal with OS-level windowing, security and all of that, so it’s no walk in the park either.

In the presentation (sadly I don’t have a link to this one) David went to great lengths to explain that, regardless of operating system, as long as someone implemented a driver class that exposed the set of features FMX needs, Firemonkey would run wherever the compiler supported the instruction set. Visual engines could be DirectX, OpenGL, Cairo or whatever makes sense on that particular platform. As long as the “bridge class” talking to the operating system is there, Firemonkey can run on a toaster if need be.

So Firemonkey has the same abstraction concept that we use in the VJL (Visual JavaScript Component Library) for Smart Pascal (differences notwithstanding). If it’s Windows you are running on, DirectX is used; if it’s OS X, then Apple’s implementation of OpenGL runs the show – and if you are on Linux, you can pick between OpenGL and Cairo. I must admit I haven’t looked too closely at Cairo, but I know it was designed to make advanced composition and UI rendering more efficient.
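The driver idea itself is simple enough to sketch in a few lines of Pascal (the class and method names here are mine, for illustration only; neither FMX nor the VJL use these exact names). The framework codes against an abstract rendering driver, and each platform contributes one concrete implementation:

type
  // The abstract "bridge": everything the UI framework needs from a backend
  TRenderDriver = class
  public
    procedure FillRect(const x, y, w, h: integer); virtual; abstract;
    procedure DrawText(const x, y: integer; const Text: string); virtual; abstract;
  end;

  // One descendant per platform; UI code never knows which one it got
  TOpenGLDriver = class(TRenderDriver)
  public
    procedure FillRect(const x, y, w, h: integer); override;
    procedure DrawText(const x, y: integer; const Text: string); override;
  end;

procedure TOpenGLDriver.FillRect(const x, y, w, h: integer);
begin
  // issue the OpenGL calls for a filled rectangle here
end;

procedure TOpenGLDriver.DrawText(const x, y: integer; const Text: string);
begin
  // rasterize and blit the text through OpenGL here
end;

Port the driver, and everything built on top of it follows along for free.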

Why Tokyo didn’t ship with visual application support is beyond me, but considering the timing of what happened next, I have drawn my own conclusions. It doesn’t really matter, to be honest – the point is we got it, and it rocks!

FMX4Linux to the rescue!

Remember the wager I mentioned with Jim McKeeth? It wasn’t a serious wager; I just commented and said “a Linux FMX solution will appear within 24hrs, you can bet on it” and added a smiley. I had no idea who or how, I just knew it was going to turn up.

Because one thing is a sure bet: the Delphi community is a group of highly resourceful, inventive and clever people. And I was pretty sure it wouldn’t take many hours before someone came up with a patch or framework to fill the void. And right I was: less than 24 hours later it was fact rather than conjecture.

fmxlinux

This is just a must have. There is no debate.

So, less than 24 hours after Delphi Tokyo hit the shelves, Eugene Kryukov and Alexey Sharagin presented “FMX for Linux”, giving you both the missing rendering back-end that talks to the system – and some kind of “widget mapping” (as far as I can understand; I won’t pretend to know how they did it) that renders your UI according to GTK. So it’s not just a simple “patch”, theme or class that calls a handful of external routines; it’s a full visual implementation of FMX for Linux. Impressive? Oh yeah, and then some!

Let us explore!

Over the next few days I will be giving you an in-depth look at how this system works. I’m also going to test-drive HTML Components and see if we can get that running under Linux as well. Since FMX for Linux is still in development we have to allow for that – but being able to target Ubuntu is pretty cool! And Remobjects, glorious Remobjects SDK: if that works out of the box, I will dance the jig and upload it to YouTube, I swear to cow!

Remobjects_linux

Linux is about to feel the full onslaught of object pascal

There is little doubt what the next Smart Pascal IDE will be based on, and when you combine FMX for Linux, HTML Components, Remobjects SDK and TMS into one, you have serious firepower to play with – enough to give even the most hardened C/C++ Qt developer reason to be scared. And that is before the onslaught of Data Abstract and Remobjects’ native C# compiler.

Oh man next weekend is going to be the best ever!

Let’s not forget ARM targets

asus

The “PI killer” has arrived!

On a second note, we will also be looking at my latest embedded toys – namely the Asus Tinkerboard. I just got two of them in the mail today. It’s going to be exciting to see how it fares against the Raspberry PI, the ODroid XU4 and the Intel Atom based UP board.

We will also be testing how these two cards can be clustered together using node.js to work as one – and see how that impacts performance for our node.js based Smart Desktop project. These are exciting times indeed!

To make things even more interesting, I will be pitching the Tinkerboard against the ODroid XU4 (the original version, not the tame, passive-cooled, save-the-environment unicorn they push now), and against both the x86 UP v1 and the Raspberry 3b. Although I think the Raspberry PI is in for the beating of its life when the ODroid and Tinker are overclocked to blood-lust mode!

smartdesk

The Smart desktop has a powerful node.js back-end that packs a punch

So, when LLVM-optimized JavaScript runs MC68040 machine-code at 4 times the speed of a high-end Amiga 4000, I will be content.

So let’s do another “that’s impossible”, shall we 🙂

Smart-Pascal: A brave new world, 2022 is here

April 29, 2017 6 comments

Trying to explain what Smart Mobile Studio does, and the impact it can have on your development cycle, is very hard. The market is rampant with superficial frameworks that promise you the world, and investors have been taken for a ride by hyped-up, one-click “app makers” more than once.

I can imagine that being an investor is a bit like panning for gold. Things that glitter the most often turn out to be worthless – yet fortunes may hide beneath unpolished and rugged surfaces.

Software will disrupt most traditional industries in the next 5-10 years.
Uber is just a software tool, they don’t own any cars, yet they are now the
biggest taxi company in the world. – Source: R.M. Goldman, Ph.D

So I had enough. Instead of trying to tell people what I can do, I decided to show them instead. As the Americans say, “talk is cheap” – and a working demonstration is worth a thousand words.

Care to back that up with something?

A couple of weeks ago I published a video on YouTube of our Smart Pascal based desktop booting up in VMWare. The Amiga forums went off the chart!

vmware

For those that haven’t followed my blog or know nothing about the desktop I’m talking about, here is a short summary of the events so far:


Smart Mobile Studio is a compiler that takes Pascal, like that made popular by Delphi or Lazarus, and compiles it to JavaScript instead of machine-code.

This product has shipped with an example of a desktop for years (called “Quartex media desktop”). It was intended as an example of how you could write a front-end for kiosk machines and embedded devices. Systems that could use a touch screen as the interface between customer and software.

You have probably seen those info booths in museums, universities and libraries? Or the ticket machines in subways, train-stations or even your local car-wash? All of those are embedded systems. And up until recently these have been small and expensive computers for running Windows applications in full-screen. Applications which in turn talk to a server or local database.

Smart Mobile Studio is able to deliver the exact same (and more) for a fraction of the price. A company in Oslo replaced their $300 per-board units with off-the-shelf $35 Raspberry PI mini-computers. They then used Smart Pascal to write their client software and ran it in a fullscreen browser. The Linux distribution was changed to boot straight into Firefox in full-screen. No Linux desktop, just a web display.

The result? They were able to cut production cost by $265 per unit.


Right, back to the desktop. I mentioned the Amiga community: a community of coders and gamers that grew up with the old Commodore machines back in the 80s and 90s. A new Amiga is now on the way (it just took 20+ years) – and the look and feel of its new operating-system, Amiga OS 4.1, is the look and feel I have used in the Smart Desktop environment. First of all because I grew up on these machines myself, and secondly because the architecture of that system was extremely cost-effective. We are talking about a system that delivered pre-emptive multitasking in as little as 512Kb of memory (!). So this is my “ode to OS 4”, if you will.

And the desktop has caused quite a stir both in the Delphi community, cloud community and retro community alike. Why? Because it shows some of the potential cloud technology can give you. Potential that has been under their nose all this time.

And even more important: it demonstrates how productive you can be in Smart Pascal. The operating system itself, both the visual and non-visual parts, was put together in my spare time over 3 weeks. Had I been able to work on it daily (as a normal job), I would have knocked it out in a week.

A desktop as a project type

All programming languages have project types. If you open up Delphi and click “new” you are greeted with a rich menu of different projects you can make. From low-level DLL files to desktop applications or database servers. Delphi has it all.

delphistuff

Delphi offers a wide range of projects types you can create

The same is true for Visual Studio. You click “new solution” and can pick from a wide range of different projects: web projects, servers, desktop applications and services.

Smart Pascal is the only system where you click “new project” and there is a type called “Smart desktop” and “Smart desktop application”. In other words, the power to create a full desktop is now an integrated part of Smart Pascal.

And the desktop is unique to you. You get to shape it, brand it and make it your own!

Let us take a practical example

Imagine a developer given the task to move the company’s aging invoice and credit system from the Windows desktop – to a purely web-based environment.

legacy2

The application itself is large and complex, littered with legacy code and “quick fixes” going back decades. Updating such a project is itself a monumental task – but having to first implement concepts like what a window is, tasks, user space, cloud storage, security endpoints, look and feel, back-end services and database connectivity, all of that before you even begin porting the invoice system itself? The cost is astronomical.

And it happens every single day!

In Smart Pascal, the same developer would begin by clicking “new project” and selecting “Smart desktop”. This gives him a complete desktop environment that is unique to his project and company.

A desktop that he or she can shape, adjust, alter and adapt according to the needs of the employer. Things like file-type recognition, storage and getting at that database – all of these things are taken care of already. The developer can focus on the task at hand, namely to deliver a modern implementation of the invoice and credit software – not waste months trying to force JavaScript frameworks to do things they simply lack the depth to deliver.

Once the desktop has the look and feel in order, he would have to make a simple choice:

  • Should the whole desktop represent the invoice system or ..
  • Should the invoice system be implemented as a secondary application running on the desktop?

If it’s a large and dedicated system where the users have no need for other programs running, then implementing the invoice system inside the desktop itself is the way to go.

If however the customer would like to expand the system later, perhaps add team management, third-party web-services or open-office like productivity (a unified intranet if you like) – then the second option makes more sense.

On the brink of a revolution

The developer of 2022 is not limited to the desktop. He is not restricted to a particular operating system or chip-set. In fact, cloud has already reduced these to a matter of preference. There is no strategic advantage in using Windows over Linux when it comes to cloud software.

Where a traditional developer writes and implements a solution for a particular system (for instance Microsoft Windows, Apple OS X or Linux), cloud developers deliver whole eco-systems: constellations of software constructed from many parts, both micro-services developed in-house and services from others, like Amazon or Azure.

All these parts co-operate and can be combined through established end-point standards, much like how components are used in Delphi or Visual Studio today.

smartdesk

The Smart Desktop, codename “Amibian.js”

Access to products written in Smart is through the browser, or sometimes through a “paper thin” native host (Cordova/Phonegap, Delphi and C/C++) that exposes system-level functionality. These hosts wrap your application in a native, executable container ready for the Appstore or Google Play.

Now the visual content is typically the same, and is only adapted for a particular device. The real work is divided between the client (which is now very much capable) and your server back-end.

So people still write code in 2022, but the software behaves differently and is designed to function as a group (cluster). And this requires a shift in the way we think.

asmjs

Above: One of my asm.js prototype compilers. Let’s just say it runs fast!

Scaling a solution from processing 100 invoices a minute to handling 100,000 invoices a minute is no longer a matter of code, but of architecture. This is where the traditional, native-only approach to software comes up short, while more flexible approaches like node.js are infinitely more capable.

What has emerged up until now is just the tip of the ice-berg.

Over the next five to eight years, everything is going to change. And the changes will be irrevocable and permanent.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for :)

The Smart Desktop back-end running as a system service on a Raspberry PI

As the Americans say, talk is cheap – and I’m done talking. I will do this with you, or without you. Either way it’s happening.

Nightly-build of the desktop can be tested here: http://quartexhq.myasustor.com/

Smart Desktop: Inter frame communication

April 24, 2017 Leave a comment

What is a process in a web desktop environment? I have been thinking a bit about this, and it’s actually not that easy to pin down an external script or website running in a frame as a solid “concept” (I use the word entity in my research notes). Visually it appears as a secondary, isolated web application – but ultimately (or technically) it’s just another web object with its own rendering context.

amibian_cortana

What I’m talking about is programs. Amibian.js, or Smart Desktop, can already run a wealth of applications – which ultimately are just external or internal web-pages (a very simple term in some cases, but it will have to do). But when you want complex behavior from an eco-system, you must also provide an equally complex means of communication.

And this brings me neatly to the heart of today’s task, namely how the desktop can talk to running web applications and vice versa. There are ultimately only two real options to choose from here: pure client, or client-server.

Unlike other embedded / web desktops I want to do as much as possible client-side. It would take me five minutes to implement message dispatching if I did this server-side, but the less dependent we are on a server – the more flexible amibian.js will be in terms of integration.

Screenshot

The first Desktop-Application type running in the desktop! And it’s aware that it is running under Amibian. This frame/host awareness helps establish the parent/child relationship

It turns out that the browser has an API for this: a message API that allows a main website to talk with other websites embedded in IFrames. Which is just what we need.

But it’s very crude and not as straightforward as you might think. You have commands like postMessage(), and you can set up an event-handler that fires whenever something comes through. But you also have security measures to think about. What if you interface with a web application designed to kill the system? Without the safety of an abstraction layer, that would indeed be a very easy hack.
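For context, this is roughly what the raw browser API looks like when used directly from Smart Pascal (a sketch; the origin URL is a placeholder, and the asm block is plain DOM JavaScript). Everything the framework adds – validation, dispatching, subscriptions – sits on top of this single event handler:

procedure ListenForRawMessages;
begin
  asm
    // One handler receives every message from every frame;
    // origin validation is entirely your own responsibility
    window.addEventListener('message', function (event) {
      if (event.origin !== 'https://trusted.example.com') return;
      console.log('message received: ' + event.data);
    }, false);
  end;
end;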

The Smart message API

This has been in the RTL for quite some time, but I haven’t really used it for much. I had one demo a couple of years back that used the message API to handle off-screen rendering in a separate IFrame (a sort of poor man’s thread), but that was it.

Well, time to update and bring it into 2017! So I give you the new and improved inter-frame, inter-window and inter-domain message framework! (ta da!)

It should be self-evident how to use it, but a little context won’t hurt:

  • You inherit from TW3MessageData and implement whatever data-fields you need
  • You send messages using the w3_SendMessage() method
  • You broadcast (i.e. send the message to all frames and windows listening) messages using the w3_broadcastMessage method

But here comes the twist, because I hate having to write a bunch of if/then/else switches for messages. So instead I created a simple subscription system – super small, yet very efficient! (See the usage sketch after the unit listing below.)

msgbroker

In other words, you inform the system which messages you support, and that becomes your subscription. Whenever the local dispatcher encounters that particular message type, it will forward the message object to you.

Here is the code. Notice the helper functions that deal with finding the current domain and so on – these will be handy when establishing a parent/child relationship.

Also note: you are expected to inherit from these classes and shape them to your message types. I have implemented the bare-bones version that mimics the Windows message API. And yes, please google inter-frame communication while testing this.

unit SmartCL.Messages;

interface

uses
  w3c.dom,
  System.Types,
  System.Time,
  System.JSON,
  Smartcl.Time,
  SmartCL.System;

const
  CNT_QTX_MESSAGES_BASEID = 1000;

type

  TW3MessageData         = class;
  TW3CustomMsgPort       = class;
  TW3OwnedMsgPort        = class;
  TW3MsgPort             = class;
  TW3MainMessagePort     = class;
  TW3MessageSubscription = class;

  TW3MessageSubCallback  = procedure (Message: TW3MessageData);
  TW3MsgPortMessageEvent = procedure (Sender: TObject; EventObj: JMessageEvent);
  TW3MsgPortErrorEvent   = procedure (Sender: TObject; EventObj: JDOMError);

  (* The TW3MessageData represents the actual message-data which is sent
     internally by the system. Notice that it inherits from JObject, which
     means it supports 1:1 mapping from JSON.
     This is why Deserialize is a member, and Serialize is a class member. *)
  TW3MessageData = class(JObject)
  public
    property    ID: Integer;
    property    Source: String;
    property    Data: String;

    function Deserialize: string;
    class function Serialize(const ObjText: string): TW3MessageData;

    constructor Create;
  end;

  (* A "message port" is a wrapper around the [obj]->Window[]->OnMessage
     event and [obj]->Window[]->PostMessage API.
     Where [obj] can be either the main window, or an embedded
     IFrame->contentWindow. This base-class implements the generic behavior.
     You must inherit or use a descendant class if you don't know exactly
     what you are doing! *)
  TW3CustomMsgPort = Class(TObject)
  private
    FWindow:    THandle;
  protected
    procedure   Releasewindow;virtual;
    procedure   HandleMessageReceived(eventObj: JMessageEvent);
    procedure   HandleError(eventObj: JDOMError);
  public
    Property    Handle:THandle read FWindow;
    Procedure   PostMessage(msg:Variant;targetOrigin:String);virtual;
    procedure   BroadcastMessage(msg:Variant;targetOrigin:String);virtual;
    Constructor Create(WND: THandle);virtual;
    Destructor  Destroy;Override;
  published
    property    OnMessageReceived: TW3MsgPortMessageEvent;
    Property    OnError: TW3MsgPortErrorEvent;
  end;

  (* This is a baseclass for window-handles that already exist.
     You are expected to provide that handle via the constructor *)
  TW3OwnedMsgPort = Class(TW3CustomMsgPort)
  end;

  (* This message-port type is very different from both the
     ad-hoc baseclass and "owned" variation above.
     This one actually creates it's own IFrame instance, which
     means it's a completely stand-alone entity which doesn't need an
     existing window to dispatch and handle messages *)
  TW3MsgPort = Class(TW3CustomMsgPort)
  private
    FFrame:     THandle;
    function    AllocIFrame: THandle;
    procedure   ReleaseIFrame(aHandle: THandle);
  protected
    procedure   ReleaseWindow; override;
  public
    constructor Create; reintroduce; virtual;
  end;

  (* This message port represent the "main" message port for any
     application that includes this unit. It will connect to the main
     window (deriving from the "owned" base class) and has a custom
     message-handler which dispatches messages to any subscribers *)
  TW3MainMessagePort = Class(TW3OwnedMsgPort)
  protected
    procedure HandleMessage(Sender: TObject; EventObj: JMessageEvent);
  public
    constructor Create(WND: THandle); override;
  end;

  (* Information about registered subscriptions *)
  TW3SubscriptionInfo = Record
    MSGID:    integer;
    Callback: TW3MessageSubCallback;
  end;

  (* A message subscription is an object that allows you to install
     X number of event-handlers for messages you want to receive. It's
     important to note that all subscribers to a message will get the
     same message -- there are no blocking or ownership concepts
     involved. This system is a huge improvement over the older WinAPI *)
  TW3MessageSubscription = class(TObject)
  private
    FObjects:   Array of TW3SubscriptionInfo;
  public
    function    ValidateMessageSource(FromURL: string): boolean; virtual;
    function    SubscribesToMessage(const MSGID: integer): boolean; virtual;
    procedure   Dispatch(const Message: TW3MessageData); virtual;
    function    Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
    procedure   Unsubscribe(const Handle: THandle);

    Constructor Create; virtual;
    Destructor  Destroy; override;
  end;

  (* Helper functions which simplify message handling *)
  //function  W3_MakeMsgData:TW3MessageData;
  procedure W3_PostMessage(const msgValue: TW3MessageData);
  procedure W3_BroadcastMessage(const msgValue: TW3MessageData);

  (* Audience returns true if a message-ID have any
     subscriptions assigned to it *)
  function  W3_Audience(msgId: integer): boolean;

  (* This returns true if the current smart application is embedded in a frame.
     That can be handy to know when establishing parent/child relationships
     between main-application and child "programs" running in frames *)
  function AppRunningInFrame: boolean;
  function GetUrlLocation: string;
  function GetDomainFromUrl(URL: string; const IncludeSubDomain: boolean): string;

implementation

uses SmartCL.System;

var
_mainport:    TW3MainMessagePort = NIL;
_subscribers: Array of TW3MessageSubscription;

function AppRunningInFrame: boolean;
var
  LMyFrame: THandle;
begin
  // window.frameElement Returns the element (such as <iframe> or <object>)
  // in which the window is embedded, or null if the window is top-level.
  try
    // Note: attempting to read frameElement will throw a SecurityError
    // exception in cross-origin iframes. It should just return null,
    // but not all browsers play by the rules -- hence the try/catch
    asm
      @LMyFrame = window.frameElement;
    end;
  except
    on e: exception do
    exit;
  end;
  // LMyFrame is set when we are embedded in a frame, null when top-level
  if (LMyFrame) then
    result := true;
end;

function GetUrlLocation: string;
begin
  asm
    @result = window.location.hostname;
  end;
end;

function GetDomainFromUrl(Url: string; const IncludeSubDomain: boolean): string;
begin
  url := Url.trim();
  if url.length < 1 then
  begin
    asm
      @Url = window.location.hostname;
    end;
  end;
  asm
    @url = (@url).replace(/(https?:\/\/)?(www\.)?/i, '');
    if (!@IncludeSubDomain) {
        @url = (@url).split('.');
        @url = (@url).slice((@url).length - 2).join('.');
    }
    if ((@url).indexOf('/') !== -1) {
      @result = (@url).split('/')[0];
    } else {
      @result = @url;
    }
  end;
end;

//#############################################################################
//
//#############################################################################

procedure QTXDefaultMessageHandler(sender:TObject;EventObj:JMessageEvent);
var
  x:      Integer;
  mItem:  TW3MessageSubscription;
  mData:  TW3MessageData;
begin
  mData := TW3MessageData.Serialize(EventObj.Data);
  //mData := new TW3MessageData();
  //mData.Serialize(EventObj.Data);

  for x:=0 to _subscribers.count-1 do
  Begin
    mItem := _subscribers[x];

    // Check that the message is subscribed to
    if mItem.SubscribesToMessage(mData.ID) then
    begin
      // Make sure subscriber validates the caller-domain
      if mItem.ValidateMessageSource( TVariant.AsString(EventObj.source) ) then
      begin
        (* We execute with a minor delay, allowing the browser to
         exit the function before we dispatch our data *)
        TW3Dispatch.Execute(procedure ()
        begin
          mItem.Dispatch(mData);
        end, 10);
      end;
    end;
  end;
end;

{ function W3_MakeMsgData: TW3MessageData;
begin
  result := new TW3MessageData();
  result.Source := "*";
end;      }

function W3_GetMsgPort: TW3MainMessagePort;
begin
  if _mainport = NIL then
  _mainport := TW3MainMessagePort.Create(BrowserAPI.Window);
  result := _mainport;
end;

procedure W3_PostMessage(const msgValue: TW3MessageData);
begin
  if msgValue <> NIL then
    W3_GetMsgPort().PostMessage(msgValue.Deserialize, msgValue.Source)
  else
    raise Exception.Create('PostMessage failed, message object was NIL error');
end;

procedure W3_BroadcastMessage(const msgValue: TW3MessageData);
begin
  if msgValue <> NIL then
    // Broadcast the JSON text, just like PostMessage above, so the
    // receiving dispatcher can JSON-parse it back into a TW3MessageData
    W3_GetMsgPort().BroadcastMessage(msgValue.Deserialize, msgValue.Source)
  else
    raise Exception.Create('BroadcastMessage failed, message object was NIL error');
end;

function  W3_Audience(msgId: Integer): boolean;
var
  x:  Integer;
  mItem:  TW3MessageSubscription;
begin
  result:=False;
  for x:=0 to _subscribers.count-1 do
  Begin
    mItem:=_subscribers[x];
    result:=mItem.SubscribesToMessage(msgId);
    if result then
    break;
  end;
end;

//#############################################################################
// TW3MessageData
//#############################################################################

Constructor TW3MessageData.Create;
begin
  self.Source:="*";
end;

function  TW3MessageData.Deserialize: string;
begin
  result := JSON.Stringify(self);
end;

class function TW3MessageData.Serialize(const ObjText: string): TW3MessageData;
begin
  result := TW3MessageData(JSON.parse(ObjText));
end;

//#############################################################################
// TW3MainMessagePort
//#############################################################################

constructor TW3MainMessagePort.Create(WND: THandle);
begin
  inherited Create(WND);
  OnMessageReceived := QTXDefaultMessageHandler;
end;

procedure TW3MainMessagePort.HandleMessage(Sender: TObject; EventObj: JMessageEvent);
begin
  QTXDefaultMessageHandler(self, eventObj);
end;

//#############################################################################
// TW3MessageSubscription
//#############################################################################

Constructor TW3MessageSubscription.Create;
begin
  inherited Create;
  _subscribers.add(self);
end;

Destructor TW3MessageSubscription.Destroy;
Begin
  _subscribers.Remove(self);
  inherited;
end;

function TW3MessageSubscription.Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
var
  LObj: TW3SubscriptionInfo;
begin
  LObj.MSGID := MSGID;
  LObj.Callback := @CB;
  FObjects.add(LObj);
  result := LObj;
end;

procedure TW3MessageSubscription.Unsubscribe(const Handle: THandle);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  begin
    if variant(FObjects[x]) = Handle then
    Begin
      FObjects.delete(x,1);
      break;
    end;
  end;
end;

function TW3MessageSubscription.ValidateMessageSource(FromURL: string): boolean;
begin
  // by default we accept messages from anywhere
  result := true;
end;

function TW3MessageSubscription.SubscribesToMessage(const MSGID: integer): boolean;
var
  x:  Integer;
begin
  result:=False;
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = MSGID then
    Begin
      result:=true;
      break;
    end;
  end;
end;

procedure TW3MessageSubscription.Dispatch(const Message:TW3MessageData);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = Message.ID then
    Begin
      if assigned(FObjects[x].Callback) then
      FObjects[x].Callback(Message);
      break;
    end;
  end;
end;

//#############################################################################
// TW3MsgPort
//#############################################################################

Constructor TW3MsgPort.Create;
begin
  FFrame:=allocIFrame;
  if (FFrame) then
  inherited Create(FFrame.contentWindow) else
  Raise Exception.Create('Failed to create message-port error');
end;

procedure TW3MsgPort.ReleaseWindow;
Begin
  ReleaseIFrame(FFrame);
  FFrame:=unassigned;
  Inherited;
end;

Procedure TW3MsgPort.ReleaseIFrame(aHandle:THandle);
begin
  If (aHandle) then
  Begin
    asm
      document.body.removeChild(@aHandle);
    end;
  end;
end;

function TW3MsgPort.AllocIFrame: THandle;
Begin
  asm
    @result = document.createElement('iframe');
  end;

  if (result) then
  begin
    // if no style property is present, we provide one
    if not (result['style']) then
    result['style'] := TVariant.createObject;

    // set the display style to hidden so the helper frame never shows
    result['style'].display := 'none';

    asm
      document.body.appendChild(@result);
    end;

  end;
end;

//#############################################################################
// TW3CustomMsgPort
//#############################################################################

constructor TW3CustomMsgPort.Create(WND: THandle);
begin
  inherited Create;
  FWindow := WND;
  if (FWindow) then
  begin
    FWindow.addEventListener('message', @HandleMessageReceived, false);
    FWindow.addEventListener('error', @HandleError, false);
  end;
end;

destructor TW3CustomMsgPort.Destroy;
begin
  if (FWindow) then
  begin
    FWindow.removeEventListener('message', @HandleMessageReceived, false);
    FWindow.removeEventListener('error', @HandleError, false);
    ReleaseWindow;
  end;
  inherited;
end;

procedure TW3CustomMsgPort.HandleMessageReceived(eventObj: JMessageEvent);
begin
  if assigned(OnMessageReceived) then
    OnMessageReceived(self, eventObj);
end;

procedure TW3CustomMsgPort.HandleError(eventObj: JDOMError);
begin
  if assigned(OnError) then
    OnError(self, eventObj);
end;

procedure TW3CustomMsgPort.ReleaseWindow;
begin
  FWindow := Unassigned;
end;

procedure TW3CustomMsgPort.PostMessage(msg: Variant; targetOrigin: string);
begin
  if (FWindow) then
    FWindow.postMessage(msg, targetOrigin);
end;

// This will send a message to all frames
procedure TW3CustomMsgPort.BroadcastMessage(msg: Variant; targetOrigin: string);
var
  x: Integer;
  mLen: Integer;
begin
  mLen := TVariant.AsInteger(browserAPI.Window.frames.length);
  for x := 0 to mLen-1 do
    browserAPI.Window.frames[x].postMessage(msg, targetOrigin);
end;
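
// Usage sketch (hedged): posting to this port's window vs. broadcasting to
// every frame; '*' means "any origin" per the HTML5 postMessage API:
//
//   FPort.PostMessage(lText, '*');       // one receiver (lText: JSON message text)
//   FPort.BroadcastMessage(lText, '*');  // every frame in the main window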

finalization
begin
  if assigned(_mainport) then
  _mainport.free;
end;

end.

Smart Puppy: Smart Pascal meets Linux!

April 21, 2017 Leave a comment

One of my absolute favorite operating systems in the whole world has to be Puppy Linux. I discovered it just a few days ago and have fallen completely in love with this thing. I can vaguely remember giving it a test drive a few years back, but I didn’t know much about Linux in general at the time, so I didn’t understand what it represented.

So if you are looking for a small, fast and easy-to-use Linux system – then Puppy is about as friendly as it gets. The Facebook user group of the same name is a warm and welcoming place to be. Much like Delphi Developer, the admins take pride in keeping things orderly – and the people who hang out there engage, care and help each other out.

Before you run out and download Puppy, which I hope you do later – please understand that Puppy is very different from Linux in general. You could almost say that it’s a whole alternative to mainstream Linux as we know it.

But, once you know about the differences then you are in for a treat! I will explain them in the article, so please be patient and take the time to digest.

Puppies hate fluff

One of the reasons I never converted wholesale to Linux (and yes, I did try) – is that the average Linux distro is unbearably and unnecessarily cryptic. For some reason Linux architects suffer from a terrible affliction, namely a shortage of characters. This sickness means that Linux doesn’t have enough characters for everyone, so programmers must use a maximum of five letters when naming their software. If coders ignore this shortage and blatantly name something directly or intuitively – then Richard Stallman and Lady Gaga will order a “drive-by ponytail cut” on the dude. And a Linux administrator without his ponytail is finished (the nerd equivalent of flipping burgers at McDonalds).

Puppy Linux does contain its fair share of the classic Linux software (that goes without saying). But the man behind this wonderful Linux flavour is also a level-headed, clever and resourceful man (or woman) – so he has thankfully broken with what can only be described as archaic thinking.

Puppy Linux is not exactly software impaired

So even with my minimal Linux experience I was able to navigate around the filesystem and locate my documents (the folders are actually called “Documents” and “My Documents” here). There is a whole bunch of these tiny differences – small things that make all the difference, from the way things are named to where they are stored and placed.

And it’s so small! The basic install is less than 300 megabytes in size (!) Yes you read that right. The generic Puppy Linux installation with desktop and a few popular applications is less than 300 megabytes.

In my case I can have a fully loaded development studio – featuring GCC, FPC (Free Pascal), the Lazarus IDE, the CodeBlocks IDE, KDevelop, the Anjuta developer studio – and last but never least Smart Mobile Studio – on a 2 gigabyte USB stick (!) I don’t think you can even get USB sticks that small any more. The smallest I have is 32 gigabytes and the largest is 256 gigabytes.

But before we go on with the wonders of Puppy Linux – let’s look at what Linux did wrong. Why is Linux even to this day considered hard to use? Or to put it another way: what have Windows and OS X done right to be considered easier to use, yet capable of the same (and often more)?

Naming, what Linux did wrong

One of the tenets of professional programming is to ensure that classes, members and functions have meaningful names. There was a time when you would get away with single-character class, variable and method names — but that won’t fly in 2017. Your QA department would have you for breakfast if you checked in code like that. Classes, namespaces and packages should be descriptive. End of story.

The reason this has become an almost sacred law should be obvious: it may not be you who maintains the software 5, 10 or 15 years down the road. A piece of code should always be written in such a way that it can be understood and thus maintained by others within a reasonable time-frame (which also means plenty of comments and good documentation). This is not a matter of preference, but of time and money. And when you pay out salaries these factors are one and the same.

So naming elements of software in 2017 has a lot of criteria attached to it. The most obvious so far being:

  • Always name things clearly, because that
    • ensures ease of use
    • simplifies maintenance
    • removes doubt as to “what is what”
    • leads to fewer user mistakes
  • The fewer mistakes, either in understanding something or in using something, the less money a business throws out the window. Money that could be spent paying you to make something cool instead (or fix bugs that are critical).
  • The fewer user mistakes caused by customers, the more your service department can focus on quality of service. When a company starts out it usually has outstanding support, but as it grows its service desk slowly turns into robots.
  • The easier and more intuitive a system is, the more users it will attract. If people can pick something up and just naturally figure out how things work, statistics show that they will most likely continue using it through thick and thin.

Right. With these rules in mind Рwhat happens if you take them but apply them to Linux instead? Not Linux code or libraries or stuff like that, but Linux the user-experience from top to bottom?

And don’t get me wrong, I think Linux is awesome so this is not an attack on Linux; I’m simply pointing out factors that could help make Linux even better.

I mean, just look at the Linux filesystem. Again you have this absurd shortage of characters. Why would anyone abbreviate the word “user[s]” into “usr”? It makes no sense. Same with “lib” – would it have killed them to call it “libraries”? And so it continues with “dev” – because calling it “devices” would cause the space-time continuum to break.

Shell shocked

The shell (the command-line under Windows) and its commands are really the thing that annoys me the most. There is a fine line between use and abuse, and the level of abbreviation here is beyond whimsical and harmless – and well into the realm of silly and absurd.

Who in their right mind would name a command “ps”? What could it possibly mean? The first thing that comes to mind is “print spool”. If you come from any other platform than Linux (and perhaps Unix, I don’t know) you would never¬†imagine that “ps” actually means “list all running processes and their states”.

“ps” lists the running processes and their states

Above: running “ps” from the shell lists the running processes. Would it have killed the coders to just call it, oh perhaps, “listprocesses” or “showrunningprograms”?

The “ps” command is just one in a long, long list of commands that really should be brought into the twenty-first century. The benefits should be obvious. It should not be necessary for a 43-year-old man to blog about this, because it’s been a problem for the better part of three decades.

  • Kids and teenagers are the bread and butter of all operating systems. The faster a kid or teenager can do something with a system, the more loyal that individual will be to the platform in the future.
  • Linux needs developers and users from other platforms. When someone who has been a successful developer for almost 30 years finds a system cryptic and hard to use, how much harder will it be for a non-technical user?
  • Standards are important. The location of files, libraries and settings should be uniform. As of writing, Linux seems to have 3 different standards (again, I am no expert): systemd, initd and “systemx”. The latter is just a name I made up, because no one really knows what it’s called. We are now in the realm of PlayStation, ChromeOS, WebOS and systems that build on the Linux kernel – but deviate the moment the drivers have loaded.

Again, I’m not writing this in a negative mindset. I have been using Ubuntu for a while as an alternative to Windows and OS X, but this has been a purely user-centric experience. I have not done any programming except random bits of Free Pascal and node.js experiments. I have enjoyed Ubuntu purely as a user: writing documents, checking email, browsing the web, IRC, reading newsgroups – ordinary stuff.

So I am very positive about Linux, but I have yet to find “my” flavour of it – a Linux distro I feel at home with, one that appeals to my way of working.

Until today, that is…

Enter Puppy Linux

Puppy is a flavour of Linux that just demolishes some of Linux’s holiest concepts. Everyone will tell you never to run as root – leave the root account in peace, keep it under lock and key just in case someone gets into your secondary account, right?

Well, not Puppy. Here you are expected to run as root, and you can, if you for some reason must, jump out into a secondary user – which is essentially a stand-in. So indeed – Puppy Linux is a single-user Linux system. It’s the rebel, the scoundrel and rogue of the Linux world – the distro that couldn’t care less what the other guys are doing.

Fancy a spot of coding? GCC is an SFS module away…

Secondly, and this is very cool: Puppy is highly modular. I’m not talking about packages – all Linux distros have those in some form or another. I’m talking about something called SFS files, short for squashed file-system (SquashFS).

To make a long story short, Puppy allows you to mount compressed files as disks, and they become a part of the system. It’s a bit like the virtual-drive API on Windows (if you have ever coded against that?). You may have noticed how you can double-click a .ISO file in Windows and suddenly the file is mounted in the file explorer – and stays mounted until you manually dismount the damn thing.

Well, SFS is that, but also much more. Because when you mount an SFS file, whatever applications it contains register on the start-menu, add themselves to the global path and essentially become one with the whole system. It took me a while to wrap my head around this (good features usually come with a price, so I kept waiting for the downside – but there was none!). The people I talked to about this were not coders, so they had some very colorful explanations of how it all worked. But once I realized an SFS is just a zip-file (or tarball or whatever) with a fixed structure (including mount and unmount scripts), I got the picture.

Size and speed matter

Before I started using a PC back in the early 90’s I was a huge Amiga fan. I still am (as you no doubt have noticed). One of the first differences between Amiga computing and PC computing that hit me was how wasteful PCs were. I remember being shocked when I saw how much space and CPU power the average programmer just wasted – because on the Amiga everyone strived to be as resourceful and efficient as possible.

We would spend days optimizing even the smallest parts of our applications, just to ensure that they ran at top speed and produced as little bloat as possible. This was baked into us; it was the way of the force, and as common as your grandfather’s work ethic. Quality and achievement went hand in hand.

CodeBlocks is an excellent IDE ūüôā

When you fire up Puppy Linux you are instantly reminded that there are people to this day who care about size and speed. And that maybe, just maybe, consumerism has tricked you into throwing away perfectly usable technology year after year – machines that actually had more than enough CPU power for the tasks you wanted, but were slowed down by bloated operating systems, poor programming and lazy code generators.

Puppy Linux is the fastest bloody Linux you will ever run. The only operating system I have tried that runs faster is AROS compiled for ARM (a distro called AEROS – AROS being a reverse-engineered edition of Amiga OS). But as far as x86 and the Linux kernel go – Puppy Linux is the bomb.

I know I’m repeating myself here, but: less than 300 megabytes for a fully loaded Linux distro with a text processor, browser, devkit, music player, video player and all the “typical” applications you would use for daily tasks? And it truly is the fastest hunk of junk in the galaxy – without question.

Amiga coders and the cult of joy

When I started to snoop around the Puppy environment and community, I noticed a couple of “tell-tale” signs. Tiny, subtle things that only an Amiga coder would pick up on. Enough to give you a hunch, a gut feeling – but not enough to blatantly say it out loud. “Amiga guys did this”, I would whisper to myself. And it’s not really such a big surprise to find coders, now in their 40s, who used to be Amiga coders.

In 30 years’ time there will be company owners and CEOs who grew up with PlayStation and have fond memories of that. But they won’t recognize each other by their craftsmanship – that is the difference.

The cult of joy lives on, albeit in new forms

The Amiga was special because it was not just a games machine. It was also a complete rewrite of what constituted the power operating system of its time: Unix. In other words, they copied the best stuff from Unix (which by the way had the same absurd filesystem as Linux still has) but cleaned it up. The first thing to be cleaned was (drumroll) the filesystem. But that’s another story altogether.

When I entered the Puppy Linux forum I naturally mentioned that I was a complete Linux novice, and that my favorite machine before x86 was the Amiga. And what do you think happened? Let’s just say that more than a few greeted me with open arms. These were Amiga users who went to Linux when Commodore went under all those years ago – and they had been active in shaping Linux ever since (!)

So yeah, I had a great time on their forums – it was like running into a long-lost cousin, as if you hadn’t seen a family member in 30 years and suddenly met them face to face.

Tired of 30 gigabyte operating systems?

Puppy Linux is not for everyone. It’s the kind of system you either love or hate – I have yet to find anyone on middle ground regarding Puppy. Either you use it and are thrilled about it, or you never install it.

It has a lot of good things going for it:

  • It is built to be one of the smallest working desktop environments you can get
  • It is built according to “the old ways”, where speed, efficiency and size matter
  • It runs fine on older hardware (my test machine is an 8-year-old laptop) and makes the stuff you would otherwise throw away valuable again
  • It is storage-abstracted, meaning you can keep all your personal files inside a single SFS archive (easier to back up), while the operating system remains on a USB stick
  • You don’t have to permanently install it (again, boot from a USB stick)
  • It is single-user by default, which is perfect for IoT projects and devices!
  • It supports ARM, so you can now enjoy this awesome thing on the Raspberry PI 3!
  • It’s Linux, so it has all the benefits of a rich driver database
  • The latest Puppy is binary-compatible with Ubuntu (whatever that means)
  • There are 3 different desktops for it (to my knowledge), so if you don’t like the default client just install something else
  • It is the perfect rescue USB stick. At less than 300 MB you can fit it on any old USB stick you have around the house. I think the smallest you can buy now is 4 GB
  • It has a warm, helpful, friendly and international group of users

Oh and it’s free!

As a final note: I installed Wine, the system that makes it possible to run Windows software on Linux (not an emulator – more of an API-call middleware / dispatcher). I was quite surprised to see it run Smart Mobile Studio on the first try!

So, fancy a bit of hacking this weekend? Why not give Puppy a go?

Check it out here: http://puppylinux.org/main/Download%20Latest%20Release.htm

Smart Pascal + WebOS = true

April 15, 2017 Leave a comment

If you own a television (and who doesn’t), chances are you own an LG model. The LG brand has been on the market since before recorded history (or so it feels) and has been a major player for decades.

What is little known, though, is that LG also owns and finances a unique operating system. This is not uncommon these days; most NAS and cloud vendors have their own system (there are roughly 20 cloud-based systems on the market that I know of). The only way to make a Smart TV (sigh) is to add a computer to it. And if you buy a television today, chances are it contains a small embedded board running some custom-made operating system designed to do just that.

Every vendor has their own system, and those that don’t usually end up forking Android and adapting that to their needs.

Luna WebOS

LG’s system is called WebOS. This may sound like yet another HTML eye-candy front-end, but hear me out, because this OS is not what it seems.

Except for some font issues (see disk label) Smart loaded fine under WebOS

Remember Palm OS? If you are between 35 and 45 you should remember that before Apple utterly demolished the mobile-phone market with the iPhone, one of the most popular brands was Palm – their core product being the “Palm Pilot”, a kind of digital filofax (personal organizer) and mobile phone rolled into one.

WebOS is based on what used to be called Palm OS, but it has been completely revamped, given a sexy new user interface (it looks a lot like Android, to be honest!) and brought into the present age. And best of all: it’s 100% free! It runs on a plethora of systems, from x86 to Raspberry PI to MIPS. It is used in televisions, terminals and set-top boxes around the world – and is quite popular amongst engineers.

Check out their new portal here; there are also plenty of links to pre-built images if you look around: https://pivotce.com/

JavaScript application stack

One of the coolest features, in my view, is that applications for it are primarily written in JavaScript. The OS itself is native of course, but they have discovered that JavaScript and HTML5 are an excellent way to build applications. Applications that are easy to control, sandboxed and safe – yet incredibly powerful and capable.

Smart Desktop booting Quake 3 in Luna OS

Well, today I sent them an email requesting their SDK. I know they use Enyo.js as their primary and suggested framework – but that is no problem for Smart Mobile Studio. A better question is: can Luna handle our codebase?

When I loaded up Quake 3 in pure asm.js, the TV crashed with a spectacular access violation. So they are probably not used to the level of hardcore coding Smart developers represent. But then, Quake III is about as advanced as it gets – it pushes any browser to the outer limits of what is possible.

Once I get the SDK, docs and a few examples, I will begin adding support for it as quickly as possible. That means you get a new project type especially for the platform, plus API units (typically stored under the $RTL\Apis folder).

Why is this cool?

Because with support for Luna / WebOS, you as a Delphi or Smart developer can offer your services to companies that use WebOS or Luna (the open source version) in their devices. The list of companies is substantial – and they are well-established companies. And as you have no doubt noticed, hardware engineers don’t always make the best software engineers. So this opens up plenty of opportunities for a good object pascal / Smart developer.

Luna has an Android-like quality over it – except it’s smoother to use

Remember – other developers will be using vanilla JavaScript. You have the onslaught of the VJL and the might of our libraries at your disposal. You can produce in days what others spend weeks achieving.

These are exciting days! I’ll keep you posted on our progress!

Smart Pascal, the next generation

April 15, 2017 1 comment

I want to take the time to talk a bit about the future, because like all production companies we are working towards both lesser and greater goals. If you don’t have a goal then you are in trouble; thankfully our goals have been very clear from the beginning – although I must admit that our way there has been… “colorful” at times.

When we started back in 2010 we didn’t really know what would become of our plans. We only knew that this was important; there was a sense of urgency and “we have to build this” in the air; I think everyone involved felt that this was the case, without any rational explanation as to why. Like all products of passion it can consume you in a way – and you work day and night on turning an idea into something real. From the intangible to the tangible.

It seems like yesterday, but it was 5 years ago!

By the end of 2011 / early 2012, Eric and myself had pretty much proven that this could be done. At the time there were more than enough nay-sayers, and I think both of us got flamed quite often for daring to think differently. People would scoff at me and say I was insane to even contemplate that object pascal could ever be implemented for something as insignificant and mediocre as JavaScript. This was my first meeting with a sub-culture of the Delphi and C++ community, a constellation I have gone head-to-head with on many occasions. But they have never managed to shake my resolve so much as an inch.

When we released version 1.0 in 2012, some ideas about what could be possible started to form. Jørn defined plans for a system we later dubbed “Smart net”. In essence it would be something you logged onto from the IDE – allowing you to store your projects in the cloud, compile in the cloud (connected with Adobe build services) and essentially move parts of your eco-system to the cloud. Keep in mind this was when people still associated cloud with “storage”; they had not yet seen things like Uber or Netflix, or played Quake 3 at 160 frames per second courtesy of asm.js in their browser.

The second part would be a website where you could do the same, including a live editor, access to the compiler and also the ability to buy and sell components, solutions and products. But for that we needed a desktop environment (which is where the Quartex Media Desktop came in later).

The first version of the Media Desktop, small but powerful. Here running in touch-screen mode with classical mobile device layout (full screen forms).

Well, we have hit many bumps along the road since then. But I must be honest and say that some of our detours have also been the most valuable. Had it not been for the often absurd (to the person looking in) research and demo escapades I did, the RTL wouldn’t be half as powerful as it is today. It would deliver the basics, perhaps piggyback on Ext.js or some run-of-the-mill framework – and that would be that. Boring, flat and limited.

What we really wanted to deliver was a platform. Not just a website, but a rich environment for creating, delivering and enjoying web and cloud based applications. And without much fanfare – that is ultimately what the Smart Desktop and its sexy node.js back-end is all about.

We have many project types in the pipeline, but the Smart Desktop type is by far the most interesting and powerful. And it’s 100% under your control. You can create both the desktop itself as a project – and applications that should run on that desktop as separate projects.

This is perfectly suited for NAS design (network-attached storage devices can usually be accessed through a web portal on the device), embedded boards, intranets – and even the internet, for that matter.

You get to enjoy all the perks of a multi-user desktop – one capable of remote desktop access, telnet access, sharing files and media, and playing music and video (we even compiled the mp4 codec from C to JavaScript, so you can play mp4 movies without the need for a server backend).

The Smart Desktop

The Smart Desktop project is not just for fun and games. We have big plans for it. And once it’s solid and complete (we are closing in on 46% done), my next side project will not be more emulators or demos – it will be to move our compiler(s) to Amazon, and write the IDE itself from scratch in Smart Pascal.

The Smart Desktop – A full desktop in the true sense of the word

And yeah, we have plans for Emscripten as well – which takes C/C++ and compiles it into asm.js. It will take a herculean effort to merge our RTL with their sandboxed infrastructure – but the benefits are too great to ignore.

As a bonus you get to run native 68k applications (read: Amiga applications) via emulation. While I realize this will mostly interest people who grew up with that machine, it is still a testament to the power of Smart – and to how much you can do if you really put your mind to it.

Naturally, the native IDE won’t vanish. We have a few new directions we are investigating here – but native will absolutely not go anywhere. But the cloud desktop system we are creating is starting to become what we set out to make five years ago (has it been half a decade already? Tempus fugit!). As you all know it was initially designed as an example of how you could write full-screen applications for Raspberry PI and similar embedded devices. But now it’s a full platform in its own right – with a Linux core and a node.js heart, there really is very little you cannot do here.

The Smart Pascal compiler is one of our tools that is ready for cloud-i-fication

Being able to log in to the Smart company servers, fire up the IDE and just code – no matter where you are, be it Spain, Italy, Egypt, China or good old USA – is a pretty awesome thing!

Click compile and the server does the grunt work while you test your apps live in a virtual window; switch between device layouts and targets – then hit “publish” and it goes to Cordova (or Delphi), and voila – you get a message back when binaries for 9 mobile devices are ready for deployment. One click to publish your applications on the App Store, Google Play and the Microsoft marketplace.

Object pascal works

People may have brushed off object pascal (and from experience those people have a very narrow view of what object pascal is all about), but when they see what Smart delivers – which is itself written in Delphi, powered by Delphi, and should be in every Delphi developer’s toolbox – I think it should draw attention to Delphi as a product, object pascal as a language – and Smart as a solution.

With Smart it doesn’t matter what computer you use. You can sit at home with the new A1222 PPC Amiga, or a kick-ass Intel i7 beast that chews virtual machines for breakfast. If your computer can handle a modern website, then you can learn object pascal and work directly in the cloud.

The Smart Desktop running on cheap embedded hardware. The results are fantastic, and in this case the financial saving of using Smart Pascal on the kiosk client is $400 per unit

Heck, you can work off a $60 ODroid XU4; it has more than enough horsepower to drive the latest Chrome or Firefox engines. All the compilation takes place on the server anyway. And if you want a Delphi vessel rather than PhoneGap (so that it’s a Delphi application that opens a web-view in full-screen and exposes features to your Smart code), then you will be happy to know that this is being investigated.

More targets

There are a lot of systems out there in the world, some of which did not exist just a couple of years ago. FriendOS is a cloud-based operating system we really want to support, so we are eager to get cracking on their SDK when it comes out. Being able to target FriendOS from Smart is valuable, because some of the stuff you can do in SMS with just a bit of code would take weeks to hand-write in JavaScript. So you get a productive edge unlike anything else – which is good to have when a new market opens.

As far as Delphi is concerned, there are smaller systems that Embarcadero may not be interested in – for example the many embedded systems that have come out lately. If Embarcadero tried to target them all, it would be a never-ending cat-and-mouse game. It seems like almost every month there is a new board on the market. So I fully understand why Embarcadero sticks to the most established vendors.

Smart technology allows you to cover all your bases regardless of device

But for you, the programmer, these smaller boards can represent thousands of dollars worth of savings. Perhaps you are building a kiosk system and need a good-looking user interface that is not carved in stone, touch capabilities, and low-latency full-duplex communication with a server – there is not much you can do about that if Delphi doesn’t target the board. And Delphi is a workhorse, so it demands a lot more CPU than a low-budget ARM SoC can deliver. But web-tech can thrive in these low-end environments. So again we see that Smart can complement Delphi and be a valuable addition to it. It helps you as a Delphi developer to act on opportunities that would otherwise pass you by.

So in the above scenario you can double down. You can use Smart for the user interface on a low-power, low-cost SoC (system on a chip) kiosk – and Delphi on the server.

It all depends on what you are interfacing with. If you have a full Delphi backend (which I presume you have) then writing the interface server in Delphi obviously makes more sense.

If you don’t have any back-end, then, depending on your needs or future plans, it could be wise to investigate whether node.js is right for you. If it’s not – go with what you know. Either way you can use Smart’s capabilities to deliver cost-effective, good-looking device front-ends or mobile apps. It’s a valuable tool in your Delphi toolbox.

Better infrastructure and rooting

So far our support for various systems has been in the form of APIs or “wrapper units”. This is fine if you are a low-level coder like myself, but if you are coming directly from Delphi without any background in web technology, you wouldn’t even know where to start.

So starting with the next IDE update each platform we support will not just have low-level wrapper units, but project types and units written and adapted by human beings. This means extra work for us Рbut that is the way it has to be.

As of writing the following projects can be created:

  • HTML5 mobile applications
  • HTML5 mobile console applications
  • Node.js console applications
  • node.js server applications
  • node.js service applications (requires PM2)
  • Web worker project (deprecated, web-workers can now be created anywhere)

We also have support for the following operating systems:

  • Chrome OS
  • Mozilla OS
  • Samsung Tizen OS

The following APIs have shipped with Smart since version 1.2:

  • Khronos browser extensions
  • Firefox-specific APIs
  • NodeWebkit
  • Phonegap
    • Phonegap provides native access to roughly 9 operating systems. It is however cumbersome to work with and beta-test if you are unfamiliar with the “tools of the trade” in the JavaScript world.
  • Whatwg
  • WAC Apis

Future goals

The first thing we need to do is to update and re-generate ALL header files (or pascal units that interface with the JavaScript libraries) and make what we already have polished, available, documented and ready for enterprise level use.

Why pay $400 to power your kiosk when $99 and Smart can do a better job?

Secondly, project types must be established where they make sense. Not all frameworks are suitable for full project isolation; some act more like utility libraries (like jQuery or similar training-wheels). And as much as possible of the RTL must be made platform-independent and organized according to our namespace scheme.

But there are also other operating systems we want to support:

  • Norwegian-made Friend OS, which is a business-oriented cloud desktop
  • Node.js OS is very exciting
  • LG WebOS, and their Enyo application framework
  • The Asustor DLM web operating system is also a highly attractive system to support
  • OpenNAS has a very powerful JavaScript application framework
  • Seagate NAS OS 4 likewise uses JavaScript for visual, universal applications
  • The Microsoft Universal Windows Platform allows you to create truly portable, native-speed JavaScript applications
  • QNap QTS web operating system [now at version 4.2]

All of these are separate from our own NAS and embedded device system, the Smart Desktop, which uses node.js as a backend and will run on anything, as long as node.js and a modern browser are present.

Final words

I hope you guys have enjoyed my little trip down memory lane, and also the plans we have for the future. Personally I am super excited about moving the IDE to the cloud and making Smart available 24/7 globally – so that everyone can use it to design, implement and build software for the future right now.

Smart Pascal Builder (or whatever nickname we give it) is probably the first of its kind in the world. There are a ton of “write code on the web” pages out there, but so far there is not a single hard-core development studio like I have in mind here.

So hold on, because the future is just around the corner ūüėČ

Delphi developer on its own server

April 4, 2017 Leave a comment

While the Facebook group will naturally continue exactly like it has these past years, we have set up a server for active Delphi developers on my Quartex server.

This has a huge benefit: first of all, those who want to test the Smart Desktop can do so from the same domain – and people who want to test Smart Mobile Studio can do so with myself just a PM away. Error reports etc. will still need to be sent to the standard e-mail, but now I can take a more active role in supervising the process and help clear up whatever misunderstandings may occur.

Always good with a hardcore Smart, Laz, amibian.js forum!

Besides that, we are building a lively community of Delphi, Lazarus, Smart and Oxygene/RemObjects developers! Need a job? Have work you need done? Post an ad – it won’t cost you a penny.

So why not sign up? It’s free, it won’t bite you, and we can continue regardless of Facebook’s uptime this year…

You enter here and just fill out user/pass and that’s it:¬†http://quartexhq.myasustor.com/sharetronix/

Smart Pascal: Information to alpha testers

April 3, 2017 Leave a comment

Note: This is re-posted here since we are experiencing networking problems at http://www.smartmobilestudio.com. The information should show up there shortly.

Our next-gen RTL will shortly be sent to the people who have shown interest in testing it. We will finish the RTL in 3 stages of alpha before we hit beta (including a code freeze, with only fixes to existing code) and then release. After that we move on to the IDE to bring it up to date as well.

Important changes

You will notice that visual controls now have a ton of new methods, but one in particular is very interesting: ObjectReady(). This method holds an important and central role in the new architecture.

You may remember that you sometimes had to use Handle.ReadyExecute() in the previous RTL – often to synchronize an activity, set properties, or just call Resize() when the control was ready and available in the DOM.

To make a long story short, ObjectReady() is called when a control has been constructed in full and the element is ready to be used. This also includes the ready-state of child elements. So if you create 10 child controls, ObjectReady will only be called once those 10 children have finished constructing – and the handle(s) have safely been injected into the DOM.

To better understand why this is required, remember that JavaScript is asynchronous. So even though the constructor finishes – child objects can still be busy “building” in the background.

If you look in SmartCL.Components.pas, notice that when the constructor finishes – it makes an asynchronous call to a procedure named ReadySync(). This is a very important change from the previous version, because now synchronization can be trusted. I have also added a limit to how long ReadySync() can wait – so if it takes too long, the method will simply exit.

ObjectReady

As you probably have guessed: when ReadySync() is done and all child elements have been successfully created – it calls the ObjectReady() method.

This makes things so much easier to work with. In many ways it resembles Delphi and Free Pascal’s AfterConstruction() method.

To summarize the call-chain (or timeline) all visual controls follow as they are constructed:

  • Constructor Create()
  • InitializeObject()
  • ReadySync()
  • ObjectReady()
  • Invalidate()
  • Resize()

If you are pondering what on earth Invalidate() is doing in a framework based on HTML elements: it calls Resize() via the RequestAnimationFrame API. TW3GraphicControl visual controls that are actually drawn – much like native VCL or LCL components – naturally invalidate their graphics in this method. But ordinary controls just do a synchronized resize (and re-layout) of the content.

When implementing your own visual controls (inheriting from TW3CustomControl), the basic procedures you would have to override and implement are:

  • InitializeObject;
  • FinalizeObject;
  • ObjectReady;
  • Resize;

Naturally, if you don’t create any child controls or data of any type – then you can omit InitializeObject and FinalizeObject; these act as constructor and destructor in our framework. So in the JVL (JavaScript Visual component Library) you don’t override the constructor and destructor directly unless it is extremely important.

Where to do what

What the JVL does is ensure a fixed set of behavioral traits in a linear fashion – inside an environment where anything goes. In order to achieve that, the call chain (as explained above) must be predictable and rock solid.

Here is a quick cheat sheet of what to do and where:

  • You create child instances and set variables¬†in InitializeObject()
  • You set values and access the child instances for the first time in ObjectReady()
  • You release any child instances and data in FinalizeObject()
  • You enable/disable behavior in CreationFlags()
  • You position and place child controls in Resize()
  • Calling Invalidate() refreshes the layout or graphics, depending on what type of control you are working with.

What does a custom control look like?

It’s actually quite simple. I have included a ton of units here in order to describe what they contain – you can then remove those you won’t need. The reason we have fragmented the code like this (for example System.Reader, System.Stream.Reader and so on) is that node.js, Arduino, Raspberry PI, PhoneGap and NodeWebKit are all platforms that run JavaScript in one form or another – and each has code that is not 1:1 compatible with the next.

Universal code – code that executes the same on all platforms – is isolated in the System namespace. All files prefixed with “System.” are universal and can be used everywhere, regardless of project type, target or platform.

When it comes to the reader / writer classes, it’s not just streams. We also have binary buffers (yes, you actually have AllocMem() etc. in our RTL) – but also special readers that work with database blobs and BSON attachments… hence we had no option but to fragment the units. At least it makes for smaller code 🙂

unit MyOwnControlExample;

interface

uses
  // ## The System namespace is platform independent
  System.Widget,           // TW3Component
  System.Types,            // General types
  System.Types.Convert,    // Binary access to types
  System.Types.Graphics,   // Graphic types (TRect, TPoint etc)
  System.Colors,           // TColor constants + tools
  System.Time,             // TW3Dispatch + time methods

  // Binary data and streams
  System.Streams,
  System.Reader, System.Stream.Reader,
  System.Writer, System.Stream.Writer,

  // Binary data and allocmem, freemem, move, copy etc
  System.Memory,
  System.Memory.Allocation,
  System.Memory.Buffer,

  // ## The SmartCL namespace works only with the DOM
  SmartCL.System,         // Fundamental methods and classes
  SmartCL.Time,           // Adds requestAnimationFrame API to TW3Dispatch
  SmartCL.Graphics,       // Offscreen pixmap, canvas etc.
  SmartCL.Components,     // Classes for visual controls
  SmartCL.Effects,        // Adds about 50 fx prefixed CSS3 GPU effect methods
  SmartCL.Fonts,          // Font and typeface control
  SmartCL.Borders,        // Classes that control the border of a control
  SmartCL.CSS.Classes,    // Classes for self.css management
  SmartCL.CSS.StyleSheet  // Create stylesheets or add styles by code

  { Typical child controls
  SmartCL.Controls.Image,
  SmartCL.Controls.Label,
  SmartCL.Controls.Panel,
  SmartCL.Controls.Button,
  SmartCL.Controls.Scrollbar,
  SmartCL.Controls.ToggleSwitch,
  SmartCL.Controls.Toolbar }
  ;

type

  TMyVisualControl = class(TW3CustomControl)
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure ObjectReady; override;
    procedure Resize; override;
  end;

implementation

procedure TMyVisualControl.InitializeObject;
begin
  inherited;
  // create child instances here
end;

procedure TMyVisualControl.FinalizeObject;
begin
  // Release child instances here
  inherited;
end;

procedure TMyVisualControl.ObjectReady;
begin
  inherited;
  // interact with controls first time here
end;

procedure TMyVisualControl.Resize;
begin
  inherited;
  if not (csDestroying in ComponentState) then
  begin
    // Position child elements here
  end;
end;

CreateFlags? What is that?

Delphi’s VCL and Lazarus’s LCL have had support for creation flags for ages. These essentially allow you to set some very important properties when a control is created – properties that determine how the control behaves.

  TW3CreationFlags = set of
    (
    cfIgnoreReadyState,     // Ignore waiting for readystate
    cfSupportAdjustment,    // Control requires boxing adjustment (default!)
    cfReportChildAddition,  // Call ChildAdded() on element insertion
    cfReportChildRemoval,   // Call ChildRemoved() on element removal
    cfReportMovement,       // Report movements? Managed call to Moved()
    cfReportResize,         // Report resize? Managed call to Resize()
    cfAllowSelection,       // Allow for text selection
    cfKeyCapture            // Issue key and focus events
    );

As you can see these flags are quite fundamental, but there are times when you want to alter the default behavior radically to make something work. Child elements that you position and size directly don’t need cfReportResize, for instance. This might not mean much with 100 or 200 child elements – but with 4000 rows in a DB grid, dropping that event check has a huge impact.
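
To illustrate, here is a hedged sketch of a control that opts out of resize reporting by overriding the class function CreationFlags(). TW3DBGridRow is a hypothetical name, and I am assuming the inherited defaults can be used as a starting point:

type
  TW3DBGridRow = class(TW3CustomControl)
  protected
    class function CreationFlags: TW3CreationFlags; override;
  end;

class function TW3DBGridRow.CreationFlags: TW3CreationFlags;
begin
  // start from the inherited defaults, then drop the resize notification
  result := inherited CreationFlags - [cfReportResize];
end;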

ComponentState

Yes, finally all visual (and non-visual) controls have ComponentState support. This makes your code much more elegant, and also far more compatible with Delphi and Lazarus. Here are the component states we support right now:

  TComponentState = set of (
    csCreating,   // Set by the RTL when a control is created
    csLoading,    // Set by the RTL when a control is loading resources
    csReady,      // Set by the RTL when the control is ready for use
    csSized,      // Set this when a resize call is required
    csMoved,      // Set this when a control has moved
    csDestroying  // Set by the RTL when a control is being destroyed
    );

So now you can do things like:

procedure TMyControl.StartAnimation;
begin
  // Can we do this yet?
  if (csReady in ComponentState) then
  begin
    // Ok do magical code here
  end else
  //If not, call back in 100ms - unless the control is being destroyed
  if not (csDestroying in ComponentState) then
    TW3Dispatch.Execute(StartAnimation, 100);
end;

Also notice TW3Dispatch. You will find this in System.Time and SmartCL.Time (it is a partial class and can be expanded in different units); we have isolated timers, repeats and delayed dispatching in one place.

SmartCL.Time adds RequestAnimationFrame() and CancelAnimationFrame() access, which is an absolute must for synchronized graphics and resize.
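
For instance, here is a small hedged sketch of delayed dispatching, matching the TW3Dispatch.Execute(callback, ms) form used in the snippet above (the anonymous procedure and TMyControl are just illustrations):

procedure TMyControl.FlashStatus;
begin
  // run a snippet of code half a second from now
  TW3Dispatch.Execute(procedure ()
    begin
      WriteLn('half a second later');
    end, 500);
end;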

Other changes

In this post I have tried to give you an overview of the immediate changes – changes that will hit you the moment you fire up Smart with the new RTL. It is very important that you change your existing code to make use of the ObjectReady() method in particular. Try to give the child elements some air between creation and first use – you get a much more consistent result in all browsers.

The total list of changes is almost too big to post on a blog, but I did publish a part of it earlier – and not even that list contains the full extent. I have tried to give you an understanding of the most important changes here.

Click here to look at the full change log

Node and Delphi in sweet harmony

March 31, 2017 Leave a comment

There are cases when you want to dip your toe into Delphi land. While I can’t think of anything in particular at the moment – it would be handy if you, say, have some Delphi DLLs that you would like to re-use, so you don’t have to re-write the whole thing in Smart Pascal.

Enter FFI

There is this mind-blowing package over at NPM, and if you don’t know it yet you are going to love it. It’s called FFI (foreign function interface) and essentially allows you to call functions from ordinary libraries (.dll, .so and .dylib files). It is supported on Windows, OS X and Linux. So yeah, now you can call Delphi from your Smart code!

So let’s say you have this awesome data backend written in Delphi and you don’t want to re-write 10 years of code. What you can do is simply create a DLL and expose a simplified API for the core functionality — and then call that from your Smart Pascal node.js server — which is just mind-bendingly awesome (excuse my language)!

You can also call functions exposed in the same process. So let’s say you are calling node.js from your Delphi applications (or C++, or Free Pascal, or C# – whatever floats your boat), then you guessed it – you can now invoke methods from the “runner”.
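
To give you an idea, here is a minimal, hedged sketch of what this could look like from a Smart Pascal node.js project. The library name "backend" and the exported function GetCustomerCount are hypothetical – the point is only the shape of the ffi.Library() call:

procedure CallDelphiBackend;
var
  lib: variant;
begin
  asm
    var ffi = require('ffi');
    // map the Delphi export: function GetCustomerCount: Int32; stdcall;
    @lib = ffi.Library('backend', {
      'GetCustomerCount': [ 'int', [] ]
    });
  end;
  var lCount := lib.GetCustomerCount();
  WriteLn('Customers: ' + IntToStr(lCount));
end;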

There goes another weekend

I was going to relax this weekend, but I’m busy compiling node.js from scratch (and you know how I feel about C++ on Linux – it’s like the worst date ever!) into something Visual Studio can eat – and then exporting it from that into Delphi. For some reason C++ Builder goes bonkers when I try to build it there. Which is odd, because it should be vanilla C/C++ (probably some GNU curiosity messing it up).

Create awesome full browser-desktop apps in Smart!

But once it’s .DCU-ified, how does a TNodeJS component sound? We already have TV8Engine – so yeah, interfacing with the mothership (read: Delphi) is important to us.

Anyway – if your brain didn’t just explode from the FFI info, then get ready to go all giddy inside right now – because this utterly bridges the gap between native code and scalable, kick-ass, clustered JavaScript on the server side!

https://www.npmjs.com/package/ffi

If you still wonder whether this is powerful enough, check out the Smart-based operating system that is currently kicking serious butt! It even runs 68k assembly programs (real Amiga applications) in a window! This is the core demo for the SMS update (it should have been out 2 weeks ago, I know – but it’s not me in charge of that bit!).

An OS in Smart? Sure, with a Linux bootstrap of course – but man does it run!

Oh, and by the way: Smart Pascal now gives you access to more than 350,000 node.js packages. That is the biggest, most bad-ass code repository in existence. And you can add TypeScript as well.

Ok let’s do this!

Smart Pascal: Update specifics

January 20, 2017 Leave a comment

With what is probably the biggest update Smart Mobile Studio has seen to date only a couple of weeks away, what should the Smart Pascal programmer know about the new RTL?

I have written quite a few articles on the changes, so you may want to backtrack and look at them first – but in this post I will try to summarize two of the most important changes.

Unit files, a background

The first version of Smart Mobile Studio (SMS) was designed purely for WebKit-based HTML5 applications. As the name implies, the initial release was aimed at mobile application development, with the target firmly fixed on iPhone and iPad. As a result, getting rid of WebKit-exclusive code has been very time-consuming, to say the least.

At the same time we added support for embedded devices, like the Espruino micro-controller, and during that process it became clear that the number of devices we can support is huge. As of writing we have code for more than 40 embedded devices lined up, ready to be added to the RTL (after the new RTL is in circulation).

Add to this the fact that node.js is now the most powerful cloud technology in the world, and the old “DOM only”, WebKit-oriented codebase was hopelessly narrow-minded and incapable of doing what we wanted. Something had to give, otherwise this would be impossible to maintain – not to mention document in any reasonably logical form.

A better unit system

I had to come up with a system that preserved universal code – yet exposed special features depending on the target you compile for.

So when you compile for HTML5 you get all the cool stuff the document object model (DOM) has to offer, but also full access to the universal units. The same goes for node.js: it has all the standard stuff that is universal – but exposes a ton of JS objects that exist nowhere else.

What I ended up with was that each namespace should have the same units, but adapted for the platform it represents. So you will have units like System.Time.pas, which represents universal functionality regarding timing and synchronization; but you are supposed to use the one prefixed by your target namespace – for instance, SmartCL.Time or SmartNJ.Time. In cases where the platform doesn’t expose anything beyond the universal code, you simply fall back on the System.*.pas unit.

It may look complex, but it’s super easy once you get it

The rules are super simple:

  • Coding for the browser? Use the units prefixed with SmartCL.*
  • Coding for node.js? Use the units prefixed with SmartNJ.*
  • Coding for embedded? Use the units prefixed with SmartIOT.*

Remember that you should always include the System version of the unit as well. So if you include SmartCL.Time.pas, add System.Time.pas to the uses list too. It also makes it easier to navigate should you want to CTRL + click on a symbol.
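
As a quick illustration, here is a hedged sketch of what a browser project’s uses clause could look like under this scheme:

uses
  System.Time,   // universal timing code, safe on every target
  SmartCL.Time;  // browser-specific additions (requestAnimationFrame and friends)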

Better synchronization engine

JavaScript is async by nature. Getting JavaScript to behave like traditional object pascal is not only hard, it is futile. Besides, you don’t want to be treated like a child and shielded from the reality of JavaScript – because that’s where all the new cool stuff is.

In Delphi and Lazarus we are used to our code being blocking. So if you create child objects in your constructor, these objects are available there and then. Well, that’s not how JavaScript works. You can create an object and the code continues – but in reality the object may not be finished yet. It may in fact still be assembling in the background (!)

To make writing custom controls sane and humane, we had to change a few things. So in your constructor (which is InitializeObject in our RTL, not the actual class constructor) you are expected to create child objects – but you should never use them there.

Instead there is a method called ObjectReady() which naturally fires when all your child elements have been constructed properly and the control has been safely injected into the document object model. This is where you set properties and start to manipulate your child objects.

So finally the setup process is stable, fast and working as you expect. As long as you follow these two simple rules, your controls will appear and function exactly as you expect them to (see the sketch after the list):

  • Never override Create and Destroy, use InitializeObject and FinalizeObject
  • Override ObjectReady() to set initial values
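
A minimal, hedged sketch of the pattern (TW3Label and its Caption property are stand-ins for whatever child control you create):

type
  TMyClock = class(TW3CustomControl)
  private
    FText: TW3Label;
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure ObjectReady; override;
  end;

procedure TMyClock.InitializeObject;
begin
  inherited;
  // create only - the DOM element may still be assembling
  FText := TW3Label.Create(self);
end;

procedure TMyClock.FinalizeObject;
begin
  FText.free;
  inherited;
end;

procedure TMyClock.ObjectReady;
begin
  inherited;
  // safe: all child handles are now injected into the DOM
  FText.Caption := 'Ready!';
end;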

Creation flags

We have also added creation flags to visual controls. This is a notion familiar from both Delphi’s VCL and Lazarus’ LCL component structures. It allows you to define a few flags that determine behavior.

For instance, if your control is sized manually and will never really need to adapt – then it’s a waste of time to have Resize() fire when the size is altered. Your code will run much faster if the RTL can just ignore these changes. The flags you can set are:

  TW3CreationFlags = set of
    (
    cfIgnoreReadyState,     // Ignore waiting for readystate
    cfSupportAdjustment,    // Control requires boxing adjustment (default!)
    cfReportChildAddition,  // Call ChildAdded() on element insertion
    cfReportChildRemoval,   // Call ChildRemoved() on element removal
    cfReportMovement,       // Report movements? Managed call to Moved()
    cfReportResize,         // Report resize? Managed call to Resize()
    cfAllowSelection,       // Allow for text selection
    cfKeyCapture            // Assign tabindex, causing the control to issue key and focus events
    );

To alter the default behavior you override the method CreationFlags(), which is a class function:

class function TW3TagObj.CreationFlags: TW3CreationFlags;
begin
  result := [
    // cfAllowSelection,
    cfSupportAdjustment,
    cfReportChildAddition,
    cfReportChildRemoval,
    cfReportMovement,
    cfReportReSize
    ];
end;

Notice the cfAllowSelection flag? This determines whether mouse selection of content is allowed. If you are making a text display, an editor, or anything the user should be allowed to select from – then you want this flag set.

The cfKeyCapture flag is also one many people have asked about. It determines whether keyboard input should be allowed for the element. By default this is OFF. But now you don’t have to manually mess around with element attributes – just add the flag and your control will fire the OnKeyPress event whenever it has focus and the user presses a key.

Tweening and fallback effects

In the last update a bug managed to sneak into our SmartCL.Effects.pas library, which means a lot of people haven’t been able to fully use all the effects. This bug has now been fixed, but we have also added a highly efficient tweening library. The concept is that effects will be implemented both as hardware-accelerated, GPU-powered effects – and with CPU-based tweening alternatives.

For mobile devices it is highly advised to use the GPU versions (the normal SmartCL.Effects.pas versions).

When you add the unit SmartCL.Effects.pas to your forms, what happens is that ALL TW3MovableObject-based controls are suddenly extended with around 20 fx-prefixed methods. This is because we use partial classes (which means classes can be extended across units).

This is super powerful, and with a single line of code you can make elements fly over the screen, shrink, zoom out or scale in size. You even get a callback when the effect has finished – and you can daisy-chain calls to execute a whole list of effects in sequence.

FMyButton.fxSizeTo(10, 10, 200, 200, 0.8, procedure ()
  begin
    writeln("Effect is done!");
  end);

And they can be executed in sequence – the effects will wait their turn and run one by one:

FMyButton.fxMoveTo(10,10, 0.8)
  .fxSizeTo(100,100, 0.8)
  .fxFadeTo(0.3, 0.8);

Stay tuned for more info!