
Quartex Pascal, Build 10b is out for backers

July 26, 2020

I am deeply moved by some of the messages I have received about Quartex Pascal, typically from people who either bought Smart Mobile Studio or have followed my blog over the years. In short, developers who like to explore new ideas; people who also recognize some of the challenges large and complex run-time libraries like the VCL, FMX and LCL face in 2020.

Since I started with all this “compile to JavaScript” stuff, the world has changed. And while I’m not always right – I was right about this: JavaScript has evolved into a power-house, largely thanks to Microsoft killing Basic. But writing large-scale applications in JavaScript is extremely time consuming — which is where Quartex Pascal comes in.


Quartex Pascal evolves every weekend, with at least two builds each weekend for backers. Why not become a backer and see the product come to life? You get instant access to new builds and the docs, and can learn why QTX code runs so much faster than the alternatives.

A very important distinction

Let me first start by reiterating what I mentioned in my previous post, namely that I am no longer involved with The Smart Company, nor with Smart Mobile Studio. I realize that it can be difficult for some to separate me from that product, since I blogged and created momentum for it for more than a decade. But alas, life changes and sometimes you just have to let go.

The QTX Framework, which today has become a fully operational RTL, was written by me between 2013 and 2014 (when I was not working for the company due to my spinal injury). The first version of that framework was released under an open-source license.

When I returned to The Smart Company, it was decided that, to save time, we would pull the QTX Framework into the Smart RTL. Since I owned the QTX Framework, and it was open source, it was perfectly fine to include the code. The code was bound by the open-source licensing model, so we broke no rules by including it. And I gave dispensation that it could be included (although the original license explicitly stated that the units should remain untouched and separate, and only inherited from).


Quartex Media Desktop: a complete desktop system (akin to X Windows for Linux) written entirely in Object Pascal, including a clustered, service-oriented back-end. All of it done in Quartex Pascal — a huge project in its own right. Under Quartex Pascal, this is now a project type, which means you can have your own cloud solution at the click of a button.

As I left the company for good before joining Embarcadero, The Smart Company and I came to an agreement that the parts of QTX that still exist in the Smart Mobile Studio RTL could remain. It would be petty to make a big deal out of it, and I left on my own terms; no point ruining all the hard work we did. So we signed an agreement that acknowledged there would be overlaps here and there in our respective codebases, and that the QTX Framework and Quartex Media Desktop were my property.

Minor overlaps

As mentioned, there will be a few minor overlaps, but nothing substantial. The class hierarchy and architecture of the QTX RTL is so different that 90% of the code in the Smart RTL simply won’t work. And I made it that way on purpose, so there would be no debates over who made what.

QTX represents how I felt the RTL should have been done. And it’s very different from where Smart Mobile Studio ended up.

The overlaps are simple and few, but it can be helpful for Smart developers to know about them if they plan on taking QTX for a test-drive:

  • TInteger, TString and TVariant. These were actually ported from Delphi (part of the Sith Library, a pun on Delphi’s Jedi Library).
  • TDataTypeConverter came in through the QTX Framework. It has been completely re-written from scratch. The QTX version is endian-aware (it works on ARM, x86 and PPC). Classes that deal with binary data (like TStream, TBuffer etc.) inherit from TDataTypeConverter. That way, you don’t have to call a secondary instance just to perform conversion. This is easier and much more efficient.
  • Low-level codecs likewise came from the QTX Framework, but I had to completely re-write the architecture. The old model could only handle binary data, while the new codec classes also cover text-based formats. Codecs can be daisy-chained, so you can do encoding, compression and encryption by feeding data into the first codec and catching the processed data from the last codec in the chain (see the sketch after this list). Very handy, especially when dealing with binary messages and database drivers.
  • The in-memory dataset likewise came from the QTX Framework. This is probably the only unit that has remained largely unchanged. So that is a clear overlap between the Smart RTL and QTX.
  • TextCraft is an open-source library that covers Delphi, Freepascal and DWScript. The latter was pulled in and used as the primary text-parser in Smart. This is also the default parser for QTX, and it has been largely re-written so it could be re-published under the Shareware license.
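
To make the codec chaining concrete, here is a minimal sketch in QTX-style Object Pascal. Note that every identifier below (the TQTXCodec descendants, Chain, OnDataReady, Write) is a hypothetical stand-in for illustration only; the actual QTX unit names and signatures may differ.

    // Hedged sketch of daisy-chained codecs: encode -> compress -> encrypt.
    // All class and method names here are illustrative, not the real QTX API.
    var
      Encoder:   TQTXCodec;
      Deflater:  TQTXCodec;
      Encrypter: TQTXCodec;
    begin
      Encoder   := TQTXBase64Codec.Create(nil);
      Deflater  := TQTXDeflateCodec.Create(nil);
      Encrypter := TQTXRC4Codec.Create(nil);

      // Each codec pushes its output to the next codec in the chain
      Encoder.Chain(Deflater).Chain(Encrypter);

      // Catch the fully processed data from the last codec in the chain
      Encrypter.OnDataReady := procedure (Sender: TObject; const Data: TBinaryData)
      begin
        WriteLn('Processed ' + IntToStr(Data.Length) + ' bytes');
      end;

      // Feed raw data into the first codec to start the pipeline
      Encoder.Write(Payload);
    end;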

Since the QTX RTL is very different from Smart, I haven’t really bothered to use all of the old code. Stuff like the CSS Effects units likewise came from the QTX Framework, but the architecture I made for Smart is not compatible with QTX, so it makes no sense to use that code. I ported my Delphi tweening library to DWScript in 2019 as part of my Patreon project, so most of the effects in QTX use our own tweening library. This has some very powerful perks, like being able to animate a property on any object rather than just an HTML element. And you can use it for Canvas drawing too, which is nice.

Progress. Where are we now?

So, where am I in this work right now? The RTL took more or less a year to write from scratch. I only have the weekends for this type of work, and it would have been impossible without my backers, so I cannot thank each backer enough for their faith in this. The RTL and new IDE are actually just a stopping point on the road to a much bigger project, namely CloudForge, which is the full IDE running as an application on the Quartex Media Desktop. But first, let’s see what has been implemented!

AST unit view


The Unit Overview panel. Easy access to ancestor classes as links (still early R&D), and the entire RTL on a second tab. This makes it very easy to learn the new RTL. There is also proper documentation, both as PDF and as a standard help file.

When the Object Pascal code is compiled by DWScript, it goes through a rigorous process of syntax checking, parsing, tokenizing and symbolization (or objectification is perhaps a better word), where every inch of the code is transformed into objects that the compiler can work with and produce code from. These symbols are isolated in what is known as an AST, short for “Abstract Syntax Tree”: basically a massive in-memory tree structure that contains your entire program reduced to symbols and expressions.

In order for us to have a live structural view of the current unit, I have created a simple background process that compiles the current unit, grabs the resulting AST, locates the unit symbol, and then displays the information in a meaningful way. This is exactly what most other IDEs do, be it Visual Studio, Embarcadero Delphi, or Lazarus.

So we have that in place already. I also want to make it more elaborate, where you can click yourself to glory by examining ancestors, interfaces, partial class groups – as well as an option to include inherited members (which should be visually distinct). Either way, the AST code is done. I just need to consolidate a few tidbits so that each treeview node retains information about its source-code location (so that when you double-click a symbol, the IDE navigates to where the symbol exists in the codebase).
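
For the curious, the background compile boils down to very little code. The sketch below uses DWScript’s public entry points (the TDelphiWebScript component and the IdwsProgram interface); the tree-population helper is my own simplification, and exact property names may differ between DWScript versions.

    uses
      dwsComp, dwsCompiler, dwsSymbols;

    procedure TUnitOverviewPanel.RefreshView(const SourceCode: string);
    var
      Prog: IdwsProgram;
      i: Integer;
    begin
      // DWScript returns a program (with its symbol table) even when the
      // source has errors, which is exactly what a live view needs
      Prog := FCompiler.Compile(SourceCode);

      TreeView.Items.BeginUpdate;
      try
        TreeView.Items.Clear;
        // Mirror the compiled unit's symbol table as tree nodes
        for i := 0 to Prog.Table.Count - 1 do
          AddSymbolNode(nil, Prog.Table[i]); // recursive helper (mine)
      finally
        TreeView.Items.EndUpdate;
      end;
    end;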

JavaScript parsing and compilation

QTX doesn’t include just one compiler, but three. In order for the unit structure view to also work for JavaScript files, I have modified Besen, an ES5-compatible JavaScript engine written in Delphi, and I use it much like DWScript to parse and work with the AST.


Besen is a wonderful compiler. Where DWScript produces JavaScript from Object Pascal, Besen produces bytecodes from JavaScript (which are further JIT-compiled). This opens up some interesting options. I need to add support for ES6 though; modules and require are especially important for modern node.js programming (and yes, the QTX RTL supports these concepts).

HTML5 Rendering and CSS preview

Instead of using Chromium inside the IDE, which is pretty demanding, I have decided to go for HTMLComponents to deal with “normal” tasks. Take the “Welcome” tab-page, for example: it would be complete overkill to use a full Chromium instance just for that, and TEdgeBrowser is likewise shooting sparrows with a bazooka.

HTMLComponents has a blisteringly fast panel control that can render more or less any HTML5 document you throw at it (much better than the old TFrameViewer component). Obviously it doesn’t have JS support, but we won’t be using JS when displaying static information – or indeed, when editing HTML5-compliant content.

WYSIWYG Editor

The biggest benefit of HTMLComponents is that it’s a fully operational HTML-compliant editor, which means you can do more or less all your manual design with it. Quartex Pascal has direct support for HTML files; it works much like Visual Studio Code, except it has visual designers. So you can create an HTML file and either type in the code manually, or switch to the HTMLComponents editor.

Which is what products like Help & Manual use it for.


Image from HTMLComponents application gallery website

Support for HTML, CSS and JS files directly

While not new, this is pretty awesome, especially since we can do a bit of AST navigation here too and present similar information as we do for Object Pascal. The whole concept behind the QTX RTL is that you have full control over everything. You can stick to a normal Delphi-like form designer with absolute positioning, or you can opt for a more dynamic approach where you create content via code. This is perfect for modern websites that blend scrolling, effects and content (both dynamic and static) for a better user experience.

You can even (spoiler alert) take a piece of HTML and convert it into visual controls at runtime. That is a very powerful feature, because when building large-scale, elaborate custom controls you can just tell the RTL: “hey, turn this piece of HTML into a visual control for me, and deliver it back when you are ready”.
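
As a taste of how that might look from user code, here is a hedged sketch. TQTXWidget.FromHTML and its callback signature are hypothetical names I use for illustration, not the final RTL API.

    // Hypothetical sketch: asking the RTL to wrap markup as a live widget
    TQTXWidget.FromHTML(
      '<div class="card"><h2>Hello</h2><p>Built at runtime</p></div>',
      procedure (Widget: TQTXWidget)
      begin
        // Delivered back once the element tree is parsed and wrapped
        Widget.Parent := Self;
        Widget.SetBounds(10, 10, 240, 120);
      end);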

Proper Form Designer

Writing a proper form designer like Delphi has is no walk in the park. It has to deal not just with a selected control, but also with child elements. It also has to be able to select multiple elements based on key-presses (shift + click adds another item to the selection), or from the selection rectangle.


A proper form layout control. Real-time rendering of controls is also possible, courtesy of HTMLComponents, but to be honest it just gets in the way. It’s much easier to work with this type of designer. It’s fast, responsive and accurate, and will have animated features that make it a joy to work with.

Well, that’s not going to be a problem. I spent a considerable amount of time writing a proper form designer, one that takes both fixed and dynamic content into account. So the Quartex form designer handles both absolute and stacked layout modes (stacked means top-down, what in HTML is known as block element display, where each section stretches to the full width and only has a defined height [that you can change]).

Node.js Service Protocol Designer

Writing large-scale servers, especially clustered ones, is very fiddly in vanilla JavaScript under node.js. It takes 3 seconds to create a server object, but as we all know, without proper error handling, a concurrent message format, modern security and a solid framework to handle it all — that “3 second” claim falls to the ground quickly.

This is where the Ragnarok message system comes in, which is both a message framework and a set of custom servers adapted for dealing with that type of data. It presently supports WebSocket, TCP and UDP, but expanding it to include REST is very easy.


This is where the full might of the QTX Framework begins to dawn. As I wrote before we started on the Quartex Media Desktop (which this IDE and RTL are a part of): in the future, developers won’t just drag & drop components on a form; they will drag & drop entire ecosystems.

But the power of the system is not just in how it works, how you can create your own protocols, and then have separate back-end services deal with one part of your infrastructure’s workload. It is that you can visually design the protocols using the Node Builder. This is being moved into the QTX IDE as I type, so it should make it into Build 12 next weekend.

In short, you design your protocols, messages and types – a bit like RemObjects SDK, if you have used that – and the IDE generates both server and client code for you. All you have to do is fill in the code that acts on the messages. Everything else is handled by the server components.
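
To give a rough idea of the programming model, here is a sketch of what a designed message and its hand-written handler could look like. The class names, base classes and handler signature are hypothetical; the real generated code is defined by the Node Builder.

    type
      // A message type as it might be designed in the Node Builder
      TLoginRequest = class(TQTXMessage)
      public
        Username: string;
        PasswordHash: string;
      end;

    // Only the business logic is hand-written; transport, routing and
    // serialization are handled by the generated server components
    procedure TAuthService.HandleLogin(const Request: TLoginRequest;
      const Response: TLoginResponse);
    begin
      Response.Success := FUserDB.Validate(Request.Username,
        Request.PasswordHash);
    end;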

Suddenly, you can spend a week writing a large-scale, platform-agnostic service stack — something that would have taken JavaScript developers months to complete. And instead of having to manage a 200,000-line codebase in JavaScript — you can enjoy a 4,000-line, easily maintainable Object Pascal codebase.

Build 11

I’m hoping to have Build 11 out tomorrow (Sunday) for my backers. I’m still experimenting a bit with the symbol information panel, since I want it to be informative not just for classes, but also for methods and properties – making it easy to access ancestor implementations etc.

I also need to work a bit more on the JS parsing. Under ES5 it’s typical to use variables to hold objects (which is close to how we think of a class), so composite and complex datatypes must be expanded. I also need to get symbol positions to work properly, because Besen, being a proper bytecode compiler, doesn’t keep as much information in its AST as DWScript does.

Widgets (which is what visual controls are called under QTX) should appear in Build 12 or 13. The IDE supports zip-packages. The file-source system I made for the TVector library (published via Embarcadero’s website a few months back) allows us to mount not just folders as a file-source, but also zip files. So QTX component packages will be ordinary zip files containing the .pas files, asset files and a metadata descriptor file that tells the IDE what to expect. Simple, easy and very effective.

Support the project!

Want to support the project? All financial backers who donate $100+ get their name in the product, access to the full IDE source-code on completion, and access to the Quartex Media Desktop system (a complete web desktop with a clustered back-end, compiled to JavaScript and running on node.js. Portable, platform- and chipset-independent, and very powerful).


Your help matters! It pays for components, hours and, above all, tools and motivation. In return, you get full access to everything and a perpetual license. No backer will ever pay a cent for any future version of Quartex Pascal. Note: PM me after donating so I can get you added to the admin group! Click here to visit PayPal: https://www.paypal.com/paypalme/quartexNOR

All donations are welcome, both large and small. But donations over $100, especially recurring ones, are what drive this project forward. They also get you access to the Quartex Developer group on Facebook where new builds, features etc. are posted. It is the best way to immediately get the latest build, read the documentation as it’s written, and see the product come to life!

 

RemObjects Elements + ODroid N2 = true

August 7, 2019

Since the release of the Raspberry PI back in 2012, the IOT and embedded market has exploded. The price of the PI SBC (single-board computer) enabled ordinary people without any engineering background to create their own software and hardware projects; and with that, the IOT revolution was born.

Almost immediately after the PI became a success, other vendors wanted a piece of the pie (pun intended), and an avalanche of alternative mini computers started surfacing in vast quantities. Yet very few of these so-called “PI killers” actually stood a chance. The power of the Raspberry PI is not just its price; it’s the ecosystem around the product – all those shops selling electronic parts that you can use in your IOT projects, for example.


The ODroid N2, one of the fastest SBCs in its class

The ODroid family of single-board computers stands out as unique in this respect. Where other boards have come and gone, the ODroid boards have remained stable, popular and excellent alternatives to the Raspberry PI. Hardkernel, the maker of ODroid boards and their many peripherals, is not looking for a “quick buck” like others have. Instead they have slowly and steadily perfected their hardware and software, and seeded a great community.

ODroid is very popular at RemObjects, and when we added 64-bit ARM Linux support a couple of weeks back, it was the ODroid N2 board we used for testing. It has been a smooth ride all the way.

ODroid

As I am typing this, a collection of ODroid XU4s is humming away inside a small desktop cluster I have built. This cluster is made up of 5 x ODroid XU4 boards, with an additional ODroid N2 acting as the head (the board that controls the rest via the network).


My ODroid Cluster in all its glory

Prior to picking ODroid for my own projects, I took the time to test the most popular boards on the market. I think I went through eight or ten models, but none of the others were even close to the quality of ODroid. It’s very easy to confuse aggressive marketing with quality. You can have the coolest hardware in the world, but if it lacks proper drivers and a solid Linux distribution, it’s for all intents and purposes a waste of time.

Since IOT is something that I find exciting on a personal level, being able to target 64-bit ARM Linux has topped my wish-list for quite some time. So when our compiler wizard Carlo Kok wanted to implement support for 64-bit ARM Linux, I was thrilled!

We used the ODroid N2 throughout the testing phase, and the whole process was very smooth. It took Carlo roughly 3 days to add support for 64-bit ARM Linux and it hit our main channel within a week.

I must stress that while the ODroid N2 is one of our verified SBCs, the code is not explicitly about ODroid. You can target any 64-bit ARM SBC provided you use a Debian-based Linux (Ubuntu, Mint etc.). I tested the same code on the NanoPI board and it ran on the first try.
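
For reference, targeting the board from Elements looks no different from any other platform. Below is a minimal Oxygene (Elements’ Object Pascal dialect) console program, assuming the standard console-app template; the namespace and class names are arbitrary, and the only platform-specific step is picking Linux/aarch64 as the build target in the IDE.

    namespace HelloArm;

    type
      Program = class
      public
        class method Main(args: array of String): Int32;
        begin
          // Compiled through LLVM to native 64-bit ARM machine code;
          // the identical source builds unchanged for x86-64 Linux
          writeLn('Hello from 64-bit ARM Linux!');
          exit 0;
        end;
      end;

    end.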

Why is this important?

The whole point of the Elements compiler toolchain is not just to provide alternative compilers; it’s also to ensure that the languages we support become first-class citizens, side by side with other archetypical languages. For example, if all you know is C# or Java, writing kernel drivers has been off limits. If you are operating with traditional Java or .Net, you have to use a native bridge (like the service host under Windows); your only other option was to code that particular piece in traditional C.


With Elements you can pick whatever language you know and target everything

With Elements that is no longer the case, because our compilers generate LLVM-optimized machine code; code that in terms of speed, access and power stands side by side with C/C++. You can even import C/C++ header files and work directly with the existing infrastructure. There is no middleware, no service host, no bytecodes and no compromise.

Obviously you can compile to bytecodes too if you like (or WebAssembly), but there are caveats to watch out for when using bytecodes on SBCs. The garbage collector can make or break your product, because when it kicks in, it causes CPU spikes. This is where Elements steps up and delivers true native compilation – for all supported targets.

More boards to come

This is just my personal blog, so for the full overview of boards I am testing there will be a proper article on our official RemObjects blog-space. Naturally I can’t test every single board on the market, but I have around 10 different models, which cover the common boards used by IOT and embedded projects.

But for now at least, you can check off the ODroid N2 (64-bit) and NanoPI-Fire 2 (32-bit).

Amibian.js under the hood

December 5, 2018

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!


In a life-preserver no less 😀

But, as with any new technology or invention, there are two common traps that people can fall into. The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google’s V8 run JavaScript almost as fast as compiled binary code (“native” meaning machine code, like that produced by a C/C++ compiler, a Pascal compiler or anything else that produces programs that run under Linux or Windows).

It takes some adjustment, especially for traditional programmers who haven’t paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT-compiled (“just in time”, meaning the compilation takes place inside the browser) to real machine code using state-of-the-art techniques. So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap you can fall into is to exaggerate what a new technology can do, attaching abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don’t fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won’t deliver. Rome was not built in a day, and it’s wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it’s easy to mix these things up, I’m underlining the distinction – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined create a portable operating system that renders to HTML5; a system written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It’s much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.


Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to another without any changes. So if you have a running Amibian.js system on your x86 PC and copy all the files to an ARM computer, you don’t even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup”, please remember what I said about JavaScript: the JavaScript of 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It delivers 120 fps flawlessly (browser is limited to 60 fps) and makes full use of standard browser technologies (WebGL).


Watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • A HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between a traditional, machine-code based OS and our web variation is that our version doesn’t have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).


Enjoying other cloud applications is easy with Amibian.js. Here is Plex, a system very much based on the same ideas as Amibian.js.

And for the record: I’m deliberately avoiding a bare-metal OS; otherwise I would have written the system in a native programming language like C or Object Pascal. So I am not using JavaScript because I lack skill in native languages; I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I had used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances, because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs written for Amibian.js can call on these to perform a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it’s designed to run clustered, using as many CPU cores as possible. It’s also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn’t need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (ODroid XU4, Raspberry PI and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above come in.

As a software developer, many of my customers work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, and ticket systems for trains and buses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, and unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock-solid application model suitable for distributed computing. You might have one ticket booth, or 10,000 nationwide. There are no systems available that are designed to deal with web technology on that scale. Yet 😉

Let’s look at a couple of real-life scenarios that I have encountered; I’m confident you will recognize a common need. Here are some roles that Amibian.js can assume and help deliver a solution for rapidly. This also gives you some idea of the economic possibilities.

Updated: Please note that we are talking JavaScript here, not native code. There are a lot of native solutions out there, but the whole point is to forget about CPU, chipset and target, and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router, you log in to your router. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update, since it’s connected to a native Apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch-capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again, they re-invent the wheel with varying degrees of success.
  • Hotels have more or less the exact same need, but on a smaller scale – especially the larger hotels, where the lobby has information booths and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on their size they can need anything from a single node to a hundred.
  • Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both, and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages while enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5-based media applications and give them the same depth as a native operating system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past, with the machine that defined multimedia: the Commodore Amiga. The relation is more than just visual; Amibian.js uses the same system architecture, because we believe it’s one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all, I’m not selling anything. It’s not like this project is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done, Amibian.js will be free for everyone (LGPL). I’m also writing it because it’s something that I need and that I haven’t seen anywhere else. I think you have to write software for yourself, otherwise the quality won’t be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly is that I use a compiler system called Smart Mobile Studio. This saves months, even years, of development time, and I can use all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project; many individuals have been involved, and the product provides a compiler, an IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready-to-use classes, or building blocks).


Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the Object Pascal language, a dialect roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data-driven applications. It also thrives in embedded systems and low-level system services. In short: it’s a lot easier to maintain 50,000 lines of Object Pascal code than 500,000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely in Smart Mobile Studio. This gives me, as the core developer of both systems, a huge advantage (who knows it better than the designer, right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc.), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn’t have OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the Object Pascal community – components and libraries written in Delphi or Freepascal, which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it’s been a popular toolkit for ages (Pascal is actually a couple of years older than C).
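
For readers unfamiliar with Smart Pascal, the point is that you write completely ordinary Object Pascal and the compiler maps it onto JavaScript for you. A classic textbook example (nothing Amibian-specific) compiles as-is:

    type
      TShape = class
      public
        function Area: Float; virtual; abstract;
      end;

      TCircle = class(TShape)
      private
        FRadius: Float;
      public
        constructor Create(ARadius: Float);
        function Area: Float; override;
      end;

    constructor TCircle.Create(ARadius: Float);
    begin
      FRadius := ARadius;
    end;

    function TCircle.Area: Float;
    begin
      // Virtual dispatch works through the compiler-generated VMT,
      // even though the emitted JavaScript only has prototypes
      Result := 3.141592653589793 * FRadius * FRadius;
    end;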

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be setup and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is yes, you install it. But it’s the same as installing Chrome OS; it’s not an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform-independent, chipset-agnostic system – something that doesn’t care if you are using ARM, x86, PPC or Mips as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it’s designed to host more elaborate applications. Where you would normally just embed an external website in an IFrame, Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window’s message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
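
To illustrate the flow, here is a hedged sketch of a hosted application asking the filesystem service for a file via its message-port. The method names and the JSON envelope fields below are purely illustrative; the authoritative layout is defined by the Ragnarok client protocol specification.

    // Hypothetical sketch; PostMessage and the envelope are illustrative
    procedure TMyHostedApp.ReadGreeting;
    begin
      FMessagePort.PostMessage(
        '{"cmd":"fs-read","path":"/home/user/hello.txt","id":42}',
        procedure (const Response: string)
        begin
          // The desktop validates the request against our security
          // manifest before the filesystem service ever sees it
          ShowMessage('Server replied: ' + Response);
        end);
    end;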

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But like I talked about above, they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
  • LDEF compiled bytecode applications, written in a 68k-inspired assembly language that is JIT compiled by the browser (commonly called “asm.js”) and runs extremely fast. The LDEF virtual machine is a sub-project in Amibian.js.

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.


Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part is to create the world’s first cloud-based development platform: a full Visual Studio clone if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic; it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming, and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform-agnostic: “no native code allowed”. This is why the entire back-end and service layer targets node.js, which ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason is that Chrome OS might be free, but it’s only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It’s quite frankly boring and too boxed-in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud or on cheap SBCs – a system that could scale from handling 10 users to 1000 users, a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn’t automatically mean bad, it actually means that it’s adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.


The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS – quite simply because the way Amiga OS is organized, and the way you approach computing on the Amiga, is brilliant. Amiga OS is much more intuitive and easier to understand than Linux or Windows. It’s a system that you could learn to use fully within a couple of days of exploring, with no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as many of the original ROM kernel functions as possible. Naturally I can’t port all of them, because they are not all relevant for Amibian.js. Things like device drivers serve little purpose here, because Amibian.js talks to node.js, node.js talks to the actual system, and the system handles the hardware. But the way you create windows and visual controls, bind events, and build a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have set up a dedicated machine with Amibian.js, then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons, or services as they are also called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence, which deals with your account, preferences and adaptations, runs when you log in to the system.

When you login to your Amibian.js account, no matter if it’s just locally on a single PC, a distributed cluster, or via the browser into your cloud account — several things happen:

  1. The client (web-page if you like) connects to the server using WebSocket
  2. Login is validated by the server
  3. The client starts loading preference files via the mapped filesystem, and then applies these to the desktop.
  4. A startup-sequence script file is loaded from your account, and then executed. The shell-script runtime engine is built into the client, as is REXX execution.
  5. The startup-script sets up configurations, creates symbolic links (assigns), and mounts external devices (Dropbox, Google Drive, FTP locations and so on).
  6. When finished, the programs in the ~/WbStartup folder are started. These can be both visual and non-visual.

As you can see, Amibian.js is not a mockup or “fake” desktop. It implements all the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced, with file data loaded via special drivers; drivers that act as a bridge between a storage service (a harddisk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (the Unix Amiga Emulator) – a fork of SAE (Scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy images with a game you love, it will run just fine in the browser. I even booted a 2-gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are being ported to JavaScript; you have NES, SNES, N64, PSX I & II, Sega Megadrive and even a Neo Geo port. So playing your favorite console games right in the browser is pretty straightforward!

But the really interesting part is probably QEmu. This allows you to run x86 instances directly in the browser too. You can boot up in Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point) but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side and only the graphics and sound are streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has – those that act almost like libraries on a physical computer? Well, these services don’t have to be on the same machine. You can place them on separate machines and divide the workload between them.


Above: The official Amibian.js cluster, 4 x ODroid XU4s SBC’s in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single-board computers). Four of these are so-called “headless” computers, meaning they don’t have an HDMI port and are designed to be logged into and set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI-out port, which serves as “the master”.

The architecture is quite simple: we allocate one whole SBC to a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture, the machine that deals with the desktop clients doesn’t have to do all the grunt work. It accepts tasks from the user and hosted applications, and then delegates the tasks between the 4 other machines.

Note: The number of SBCs is not fixed. Depending on your use you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40-core monster shipping at around $250.

But as mentioned, you don’t have to buy this to use Amibian.js. You can install it on a spare x86 PC you have, or daisy-chain a couple of older PCs on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBCs in the initial design all have a GPU (graphical processing unit) as well as audio capabilities. What they lack is GPIO pins and 3 additional USB ports. So each of the nodes in our cluster can handle graphics at blistering speed — but that is ultimately not their task. They serve more as compute modules that are given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40-core cluster I use has more computing power than Northern Europe had in the early 80s; that’s something to think about. And the price tag is under $300 (!). I don’t know about you, but I always wanted a proper mainframe – a distributed computing platform that you can log into and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant for emulating an Amiga directly in the browser, a more interesting design is to decouple the emulation from the output; in other words, run the emulation at full speed server-side, and just stream the display and sound back to the Amibian.js client. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi-threading) and fully utilizes the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but that is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it’s wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, people will start using it en masse and producing applications for it. It’s always a bit of work to reach that point and create critical mass.


Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and an ordinary internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster and have LDEF emit compatible code; voila, you can build app-store-ready applications from within a browser environment.

 

I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super-expensive hardware to order, no absurd hosting fees, and finally a system that we can all shape and use in a plethora of systems: from a fully fledged desktop to a super-advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, Amibian.js will only become more relevant. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!

I can’t wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

Nano PI Fire 3, part two

July 18, 2018

If you missed the first installment of this test, please click here to catch up. In this installment we are just going to dive straight into general use and get a feel for what can and cannot be done.

Solving the power problem

Like mentioned in the previous article, a normal mobile charger (5 volt, 2 amps) is not enough to power the Nano-PI. Since I have misplaced my original PI power-supply (5 volt / 3 amps), I decided to cheat: I plugged the power USB into my PC, which will deliver as much juice as the device needs. I don’t have time to wait for a new PSU to arrive, so this will have to do.

But for the record (and underlined): a proper PSU with at least 2.5 amps is essential to using this board. I suggest you order the official Raspberry PI 3b power-supply. But if you should find one with 3 amps, that would be even better.

Web performance

The question on everyone’s mind (or at least mine) is: how does the Nano-PI fire 3 perform when rendering cutting edge, hardcore HTML5? Is this little device a potential candidate for running “The Smart Desktop” (a.k.a Amibian.js for those of you coming from the retro-computing scene)?

Like I suspected earlier, X (the Linux windowing system) doesn’t have drivers that deliver hardware acceleration at all.


Lubuntu is a sexy desktop, no doubt there, but it’s overkill for this device.

This is quite easy to test: select a rectangle on the Lubuntu desktop and move the mouse-cursor around while holding down the left mouse button. If it lags terribly, that is a clear indicator that no acceleration exists.

And I was right on the money, because there is no acceleration whatsoever in this Linux distribution. It struggles hopelessly to keep up with the mouse-pointer as you move it around with an active selection; something that would be silky smooth had the GPU been tasked with the job.

But hardware acceleration is not just about the desktop. It’s not some flag you enable that magically affects everything; rather it consists of several APIs, at either the kernel level or the driver level (modules the kernel loads), each affecting different aspects of a system.

So while the desktop "2d blitting" is clearly CPU driven, other aspects of the system can still be accelerated (although that would be weird and rare; but considering how Asus messed up the Tinkerboard, I guess anything goes these days).

Asking Chrome for the hard facts

I fired up Google Chrome (which is the default browser, thank god) and entered the magic URL:

chrome://gpu

This is a built-in page that provides a detailed report of what Chrome has learned about the current system, right down to the specific GPU features used by OpenGL.

As expected, there was NO acceleration whatsoever, so I was quite surprised that it managed to run Amibian.js at all. Even without hardware acceleration it outperformed the Raspberry PI 3b+ by a factor of 4 (at the very least), and my particle demo ran at a whopping 8 fps (frames per second). The original Raspberry PI could barely manage 2 fps. So the Nano-PI Fire is leagues ahead of the PI in terms of raw CPU power, which is brilliant for headless servers or computational tasks.
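For those wondering how numbers like that are obtained: counting requestAnimationFrame callbacks is all it takes. Here is a minimal TypeScript sketch (this is not the particle demo's actual code, just the simplest possible probe) that logs the frame rate once a second:

```typescript
// Minimal FPS probe; run it in the page you want to measure.
let frames = 0;
let last = performance.now();

function tick(now: number): void {
  frames++;
  if (now - last >= 1000) {
    console.log(frames + " fps"); // one readout per second
    frames = 0;
    last = now;
  }
  requestAnimationFrame(tick); // schedule the next frame
}

requestAnimationFrame(tick);
```

On a healthy, accelerated system this sits near the display's refresh rate; when everything is rasterized on the CPU, you get single digits like the 8 fps above.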

FriendlyCore vs Lubuntu? QT for the win

Now here is a funny thing. So far I have used the standard Lubuntu Linux image, and performance has been interesting to say the least. No hardware acceleration, impressive CPU results, but still – what good is an SBC Linux distro without fast graphics? Sure, if you just want a headless file server or host services, then you don't need a beefy GPU. But here is the twist:

Turns out the makers of the board have a second, QT-oriented distro called Friendly-core. And this image ships with OpenGL ES support and all the acceleration missing from Lubuntu.

I was pretty annoyed with how Asus gave users the run-around with Tinkerboard downloads, but they have thankfully cleaned up their act and listened to their customers. Friendly-elec might want to learn from Asus' mistakes in this area.

QT has a rich history, but it's being marginalized by node.js and Delphi these days

Alas, the Friendly-core Xenial 4.4 ARM64 image turned out to be a pure embedded development image. This is why the board has a debug port (which is probably awesome if you are into QT development). So this is for QT developers that want to use the board as a single-application system: they write the code on Windows or Linux, compile it, and it's all transported to the board with live debugging back to the devtools they use. In other words: not very useful for non-C/C++ QT developers.

Android Lollipop

I have only used Android on a tablet and the odd Samsung Galaxy phone, so this should be interesting. I downloaded the Lollipop disk image, burned it to the SD card and booted up.

After 20 minutes with a blank screen I gave up.

I realize that some Android distros download packages ad-hoc and install directly from a repository, so it can take some time to get started; but 15-20 minutes with a black screen? The Android logo didn't even show up, and that should be visible almost immediately, network install or not.

This is a real shame, because I wanted to test some Delphi Firemonkey applications on it to see how well it scales the more demanding GPU tasks. And yes, I did try a different SD card to be sure it wasn't a disk error. Same result.

Back to Lubuntu

Having spent considerable time trying to find a "wow" factor for this board, I just have to surrender to the fact that it's not there. This is not a "PI" any more than the Tinkerboard is a PI, and appending "pi" to a product name will never change that.

I can imagine the Nano-PI Fire 3 being an awesome single-application board for QT C/C++ developers though, with a dedicated debug port making it a snap to transport, execute and live-debug directly on the hardware. But for general DIY hacking, for native Android development with Delphi, for node.js development with Smart Mobile Studio – or just kicking back with emulators like Mame, UAE or whatever tickles your fancy – it's just too rough around the edges. Which is really a shame!

So at the end of the day I re-installed Lubuntu and figure I just have to wait until Friendly-elec get their act together and issue proper drivers for the Mali GPU. So it’s $35 straight out the window — but I can live with that. It was a risk but at that price it’s not going to break the bank.

The positive thing

The Nano-PI Fire 3 is yet another SBC in a long list that falls short of its potential. Like many others they try to use the word "PI" to channel some of the Raspberry PI enthusiasm their way – but the quality of the actual system is not even close.

In fact, using PI in their product name is setting themselves up for a fall – because customers will quickly discover that this product is not a PI, which can cause some subconscious aversion and resentment.

The Nano rendered Amibian.js, running some very demanding demos, 4 times as fast as the PI 3b; one can only speculate what the board could do with proper drivers for the GPU.

The only clear positive the Fire-3 has to offer is abundantly more CPU power. It is without a doubt twice as fast (if not 3 times as fast) as the Raspberry PI 3b. The fact that it can render highly demanding and complex HTML5 demos 4 times faster than the Raspberry PI 3b without hardware acceleration is impressive; this is a $35 board after all, the same price as the PI.

But without proper drivers for the Mali it's a useless toy. Powerful and with great potential, but utterly useless for multimedia and everything that relies on fast 2D and 3D graphics. For UAE (Amiga emulation) you can pretty much forget it. Even if you could compile the latest UAE4Arm with SDL as its primary display framework, it wouldn't work, because SDL depends on the graphics drivers. So it's back to square one.

But the CPU packs a punch, that is without question.

Final verdict

Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4

There are a lot of stable and excellent options out there, take your time

I was planning to test UAE next, but as I have outlined above: without drivers that properly expose and delegate the power of the Mali, it would be a complete disaster. I'm not even sure it would build.

As such I will just leave this board as is. If it matures at some point that would be great, but my advice to people looking for a great SBC experience — get the new Raspberry PI 3b+ and enjoy learning and exploring there.

And if you are into Amibian.js or making high quality HTML5 kiosk / node.js based systems, then fork out the extra $10 and buy an ODroid XU4. If you pay $55 you can pick up the Asus Tinkerboard which is blistering fast and great value for money, despite its turbulent introduction.

Note: You cannot go wrong with the ODroid XU4. It's affordable, stable and fast. So for beginners it's either the Raspberry PI 3b+ or the ODroid; these are the most mature in terms of software, drivers and stability.

The Tinkerboard Strikes Back

August 20, 2017 1 comment

For those that follow my blog, you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it – in fact I bought two of the boards, because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never reached the user. It was released prematurely, and I think most people that bought it agree on this.

The initial release was not just bad, it was horrible

Since my initial review those months ago, good things have happened. Asus seems to have listened to the "poonami" of negative feedback and adapted their website accordingly. Unlike my first visit, when you literally had to dig through recursive menus (less than intuitive in this case) just to download the software, the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get; downloading it is reduced to a one-liner for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever-popular Python and Scratch.

I am a bit disappointed that they don't provide freepascal units. A lot of developers use object pascal on these boards, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that, after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good pascal programmer – but it would be cool if Asus remembered that there are more languages in the world than C and Python.

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the text-book example of how not to release a product (the whole release was odd; Asus is a huge, multi-national corporation, yet their release had "three-man basement band" written all over it).

So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and the like; basically anything with a touch-screen that does something.

The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix famously built its web front-end on node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end coupled with an HTML5 front-end. This is a ready-to-use environment that you can deploy your own applications through. Things like storage, a nice looking UI, user logon and credentials and much, much more are all implemented for you. You don't have to use it, of course; you can write your own system from scratch if you like. We created "Amibian" to demonstrate just how powerful Smart Pascal can be in the right hands.

With this in mind, my main concern when testing SBCs (single board computers) is obviously web performance. By default JavaScript is a single-core, event-driven runtime system; you can spawn threads of course, but it's done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system friendly, a gentle giant if you like, which has turned out to be a good thing – because the way JS schedules execution makes it ideal for clustering!

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost a whole ecosystem in themselves. With JIT compilers and LLVM optimization, it's a whole new ballgame.
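To make the clustering point concrete, here is a minimal sketch using node.js' built-in cluster module – plain TypeScript against node's standard API, nothing Smart-specific. The master process forks one worker per CPU core, and node load-balances incoming connections between them; each worker keeps its simple single-threaded event loop:

```typescript
import * as cluster from "cluster";
import * as http from "http";
import * as os from "os";

if (cluster.isMaster) {
  // Fork one worker per CPU core; the master distributes
  // incoming connections between them.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Each worker runs its own single-threaded event loop.
  http
    .createServer((req, res) => {
      res.end("handled by worker " + process.pid + "\n");
    })
    .listen(3000);
}
```

Because no state is shared between workers, scaling out like this is almost free – which is exactly why the JS execution model lends itself so well to clustering.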

Making a scale

To give you a better context for where the Tinkerboard sits on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and seeing how each of them performs on different boards. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and, last but not least, the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning that they were compiled within the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b

Bassoon ran well, it's not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (UAE4Arm contains hand-optimized assembler, so it's not exactly a fair fight) – but when it comes to modern HTML5 the PI doesn't stand a chance. You could perhaps use a Raspberry PI 3b for simple applications that are not graphics- and CPU-intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don't want a Raspberry when doing high quality IOT, unless it's headless code and node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box – in fact you can target 40 embedded systems without any change to your code. But for large, high quality web front-ends, the PI just won't cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick-ass ARM experience anyway. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4

The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here, only 10€ more than the Raspberry PI. But the ODroid XU4 delivers a good Linux desktop, and it's well worth the extra 10€ compared to the PI.

Personally I don’t understand why people keep buying PI’s when there is so much better options on the market now. At least not if web technology is involved. A small server or emulator sure, but not HTML5 and browsers. The PI just cant handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that the ODroid XU4 can run Quake.js! The PI can't even start that because it's so demanding. It is one of the largest and most resource hungry asm.js projects out there – but the ODroid XU4 did a fantastic job.

Now it’s not a silky smooth experience, I would guess something along the lines of 17-20 fps. But you know what? Thats pretty good for a $45 board.

I have owned far worse x86 PC’s in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that Asus had done something to rectify the situation though, because Asus really should have done a better job before releasing it. It's now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have waited with the release. I was not alone in butchering the board either; it's been a source of frustration for those that bought it. 75€ is not much, but no-one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I went for Ubuntu instead). If the drivers are in order, I have a feeling the Jessie image will be even faster. Ubuntu has always been a high-quality distribution, but it's also one of the most demanding; one might even say it's become bloated. But it does deliver a near Microsoft Windows-like experience, which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus has cleaned up their act and implemented the drivers properly, and you can feel it the moment the desktop comes into view. With the PI you are always fighting lagging performance: when you start a program the whole system freezes for a while, when you quit a program the system freezes – hell, when you move the mouse around the system bloody freezes! That is not the case with the Tinkerboard, that's for sure. The Tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC, to be honest.

  • Skid-Row: 10-15 frames per second
  • Quake: Full screen 32bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any bit of embedded coding, regardless of whether you are using Smart Mobile Studio or some other devkit – this is the board to get (!)

As already mentioned it costs almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, and you can explore software far more complex than both the PI and ODroid combined. With the Tinkerboard you can finally deliver a state of the art product built with off-the-shelf web components. It's in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinker I felt cheated. It was so frustrating, because the specs were so good and the terrible performance just came down to sloppy work and Asus releasing it prematurely for cash (let's face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. Like I mentioned, Ubuntu is not the racecar of distributions; Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompiled every piece with maximum LLVM optimization and as few running services as possible – the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the Tinkerboard! It is by far the coolest board in my collection now. In fact it's so good that I'm donating one to my daughter. She is presently using an iMac, which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and YouTube, buy a nice touch-screen display and wall-mount it in her room.

Well done Asus!

A day with the Tinkerboard, part 1

June 7, 2017 1 comment

Anyone into IOT mini computers, be it as a hobby or professionally, has no doubt heard about the Tinkerboard. The Tinkerboard is an alternative to the Raspberry PI (aren't they all) produced by hardware giant Asus. Everything from its form-factor (size) to its color coordinated GPIO pins is more or less the same as the PI. What separates the two is that the Tinker gives you double the CPU power, double the RAM and pretty much double of everything – if we compare it with the Raspberry PI 3b, that is.

But you don’t get double for free. The Tinkerboard retails on Amazon for USD 60 which means something in the ballpark of 55£ for those that live in the UK. For the rest of us that live in far-away places like Norway, prices and taxes may vary.

Before I continue I want to write a few words about pricing. The word “expensive” is one I keep hearing when it comes to the Tinkerboard, but that is extremely unfair considering what you get for your money.

The Raspberry PI 3b is an awesome product and packs a lot of goodies into a credit-card size computer. The same is true for the Tinkerboard, but they have packed even more stuff into the same space! The difference in price is just 15£ – but those 15£ give you the power to emulate demanding platforms like the Nintendo Wii, Sega Dreamcast and Playstation (if you are into retro gaming); platforms the Raspberry PI 3b doesn't stand a chance of emulating (at least not smoothly).

For serious applications all that extra CPU power can be the difference between success and failure. In many ways the Tinker feels a bit like working on a low-end laptop. If you are building a movie server for your living-room, the Tinkerboard will be able to convert and stream movies much more smoothly than the PI. And if you are looking for a device to power your next kiosk system or POS device, again the extra memory and CPU power is going to make all the difference. With just 15£ between the two models, I cannot imagine why anyone would pick the PI if gaming or movie streaming is on the menu.

Oh and it has built-in wi-fi and bluetooth support! Here are the specs as listed on Amazon:

  • High performance quad-core ARM SoC at 1.8GHz with 2GB of RAM – the Tinkerboard features the Rockchip RK3288 SoC with a Mali-T764 GPU and 2GB of dual-channel DDR3 memory
  • Non-shared Gbit LAN, shielded Wi-Fi with upgradable antenna support
  • Highly compatible PCB & topology – the Tinkerboard offers extensive compatibility with SBC accessories & chassis
  • HD audio & HD/UHD video support – the Tinkerboard supports 192kHz/24-bit HD audio along with accelerated HD & UHD (4K) video playback (*requires use of the Rockchip video player in TinkerOS)
  • DIY-friendly design – the Tinkerboard features multiple DIY-friendly touches, including a color-coded GPIO header, silkscreen PCB and color-coded pull tabs

That sounds pretty nice, doesn't it 🙂

One does not simply take on the mighty PI

Being an alternative to the Raspberry PI is not as easy as it sounds. You would imagine that it all boils down to the specs, right? Beef up the CPU, RAM and GPU and everyone will throw away their PIs and buy your stuff? If only that were true. In that case the Tinkerboard would be awesome, since it outperforms the PI on every level.

But it’s not that simple. There is more to a product than just raw power.

One of the cool things about the Raspberry PI is the community; the knowledge base of the product, if you like. In order to build a community and be a success, everything has to click. I mean, just look at the third-party eco-system that exists around the PI: all those companies selling add-ons and widgets, sensors and remote controls. The PI itself is quite dull. It's all that extra stuff that makes it so fun to play with. And then you have things like excellent customer service, a forum where people help each other out, well written documentation – and the value of the product suddenly exceeds whatever you paid for it to begin with.

And the Tinkerboard is not alone in trying to get a piece of the action. It competes with boards like the ODroid XU4, the x86 based UP boards, Latte Panda, The Pine boards and a whole fruit-basket of banana, orange and whatever PI clones. All of them trying to get a bite of the phenomenon that is the Raspberry PI.

So Asus has their work cut out for them.

First impressions

Having unpacked the board and plugged in all the cables (which is exactly as dull as it sounds), the first thing you do is look for software – which should naturally be a one-click operation on their website.

Well, the first thing that annoyed me was exactly that: the Asus website. The Tinkerboard is not something that belongs on a sterile, streamlined corporate site. It belongs on a website that is easy to navigate, with plenty of the information hobbyists and "tinkers" need to get started. The Asus website simply lacks the warmth and friendliness of the PI foundation website. I even had to google around a couple of times just to find the disk images. These should be readily available on the site, on the first page even, to make sure customers can get up and running quickly.

It's a little bit clangy and a little bit jammy

The presentation of the board was OK, but you really shouldn't have to use an external search engine to locate essential data for something you just bought. Unlike the PI foundation, Asus seems to have put little effort into what customers get once they have bought the board. As for community, I don't even know if there is one.

When you finally manage to find the bloody download page, these are your options (there are 17 downloads in total, counting older OS images, config files and schematics):

  • TinkerOS 1.8
  • TinkerOS 1.6 Beta (my initial download)
  • Android, also beta

And don’t expect a polished Linux distro like Pixel either. You get a clean distro with some typical stuff pre-loaded (python, perl for some reason, and various tidbits) but nothing close to the quality of Pixel.

Note: Why they would list older beta releases in their downloads is beyond me. It only adds to the confusion of an already sterile list of items, and customers who just got their boards will be eager to try them out and hence make a mistake like I did. Beta and unstable releases should be in a separate category, while the stable and up-to-date images are listed first.

The second thing that really irritated me was something as simple as adjusting the keyboard layout and setting the locale. These are simply not present in the preferences at all. Once again you have to google around until you find the terminal commands to set these things manually, which utterly defeats the point of booting into a desktop environment to begin with. This might seem trivial for someone who knows Linux well, but for a novice it can take hours. Things this fundamental should be available in the preferences, or at the very least as a script – like raspi-config on the PI.

But OK. If we look away from these superficial things and focus on the actual board, things are not half bad.

It’s the insides that counts

The first thing you are going to notice is how much faster the system is compared to the PI. When you start a demanding application on the PI, like LibreOffice or Lazarus, the whole system can freeze up for a few seconds while the CPU goes through the roof. I was pleased to find that this is not the case here (at least not as disruptive). You can start a program and the system continues to be responsive while things load.

What made me furious though was the quality of the graphics drivers. Performance wise the Tinkerboard is twice (actually a bit more than twice) as fast as the PI on all fronts – and the GPU is supposed to deliver a vastly superior graphics experience. Yet when I ran Amibian.js (the Smart Pascal JavaScript desktop system) in Chrome, the CSS hardware effects were painfully slow. I even questioned if the GPU was used at all (Note: which I later confirmed was the case, so much for downloading a beta image by mistake). I was amazed at the CPU power of the Tinkerboard, because despite having to do everything purely with the CPU (moving pixels in memory, allocating device-independent bitmaps, rasterizing layers and complex compositions in X) it was more than usable.

But it sure as hell was not the superior graphics experience I was hoping for.

I also noticed that when moving a normal desktop window (X window that is) rapidly around the screen, it would lag behind and re-trace your steps until it caught up with the mouse pointer. This was very odd since most composition engines would have eliminated these movements as much as possible – and the GPU would do all the work. Again, I blame the drivers and pondered what could possibly make a GPU perform so badly.

I'm surprised it booted at all; nothing is working

So while the system is notably faster than the PI, I get the sense that the drivers and tooling are not really polished. The hardware no doubt has the juice, but without drivers carefully written for the board, most of the extra juice you pay for is wasted.

In the end I opened up Chrome and typed "chrome://gpu" to see what was going on. And not surprisingly, I was correct. It wasn't using the GPU at all, and I had wasted hours on something that should have been clearly marked "Lacks hardware acceleration" in bold, italic, underlined big red letters. Like the PI foundation did when they shipped the Raspberry PI 2 without GPU drivers. But they promptly delivered, and it was OK because you were informed up front.

Back to scratch

After a trip back to Google I downloaded the right (or at least working) image, namely the stable TinkerOS 1.8. It booted up, resized the partition to fill the whole SD card automatically (and other boring bits).

The first thing I did was start Chrome and open "chrome://gpu" to get some statistics. And thankfully, most of the items were now working and active.

That's better, although I wonder why GPU DIBs are not active
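A tip if you want to verify this from inside your own application instead of eyeballing chrome://gpu: the standard WEBGL_debug_renderer_info extension will tell you which renderer you actually got. A small TypeScript sketch using only standard browser APIs:

```typescript
// Probe WebGL from inside the page: is there a context at all,
// and which renderer backs it (real GPU vs. software rasterizer)?
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl");

if (!gl) {
  console.log("No WebGL context - no GPU acceleration at all");
} else {
  const info = gl.getExtension("WEBGL_debug_renderer_info");
  const renderer = info
    ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL)
    : "unknown (extension not exposed)";
  console.log("WebGL renderer: " + renderer); // e.g. "Mali-T764" vs. "llvmpipe"
}
```

If the renderer string names a software rasterizer (llvmpipe, SwiftShader and friends) rather than the actual GPU, you know the drivers are not doing their job.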

Emulation, the name of the game

While I have serious tests to perform, there is no doubt that what I was most eager to play with was UAE (the Unix Amiga Emulator). Like most retro-heads I have a ton of Raspberry PIs and ODroids around the house with the sole purpose of emulating older game or computer systems. And I think everyone knows that my favorite computer in the whole wide world is the Commodore Amiga.

On the PI, Gunnar’s native Amibian distro gives you awesome performance. If you overclock the ram, gpu and cpu using safe numbers – the PI 3b spits out roughly 3.2 times the power of a Mc68040 based Amiga 4000. In other words 3 times the power of a high-end Amiga from the early 90s. That is quite an achievement for a $40 board. You can only imagine my expectations for a board running more than twice as fast as the PI does (and costing that 15£ more).

Boots fine but you can forget about using it. The PI is 100 times faster than this

One snag though: UAE4Arm, the highly optimized version of the Amiga emulator, doesn't exist for the Tinkerboard. In theory it should compile without problems, since the latest version uses SDL exclusively; there is no odd, deprecated UI toolkit like the old UAE4Arm used.

The only thing available was fs-uae. And while that is a very popular emulator on Windows and Mac, it's sadly not optimized as much as UAE4Arm is. The latter uses lookup tables to the extreme; all if/then/else statements have been replaced by fast switches – and it's just optimized beyond anything else running on ARM.

The result on the PI is phenomenal, but sadly fs-uae was all I could play with. I will try to compile UAE4Arm on the Tinkerboard later though, but right now I'm too busy.

Not optimized at all

It is hard to pinpoint exactly where the problem is, but I suspect the older codebase of fs-uae combined with poorly written drivers is what resulted in the absolutely sluggish behavior.

Don't get me wrong, if all you want to do is play some A500 or A1200 games, the Tinker is more than capable. But if you want to run the same high-performance desktop that Amibian gives you on the PI – you can pretty much forget it.

I set the system to emulate an A4000, activated the JIT engine and set the emulated display to match my desktop. This should have run like hell – yet it was slow as a snail. You could literally see the mouse-pointer slowly trying to keep up as I moved it across the desktop.

One interesting thing though: when I ran fs-uae on the first image, the beta image where everything was crap, it actually ran exceptionally well. But here, on the so-called stable and "up to date" version, it was useless for anything but clean floppy gaming.

There are many factors involved in this. The drivers sit at the bottom, just above the kernel, and expose the hardware so it can be used through a common API. On top of this you have stuff like OpenGL, and on top of that again you have SDL, which simplifies typical gaming or graphics tasks. This may sound like a long road for code to travel, but it's actually very fast – if the drivers expose the power of the GPU, that is.

I can only conclude at this point that the Tinkerboard has the firepower; I have seen the results of tests performed by others (and tested a retro-gaming image), and you feel the difference almost immediately when you boot the device. But the graphics drivers seem crippled somehow. It's easy to point the finger at Asus and say they have done a terrible job writing drivers here, but it can also be a bottleneck elsewhere in the system.

This is something the Raspberry PI foundation has got so right. Every part of the system has been optimized for the hardware, which makes the PI feel smooth even under a lot of stress. It delegates all the hard graphical stuff to the GPU – and the GPU just deals with it.

Sadly this is not the case with the Tinkerboard so far. And it really is such a shame because the specs speak for themselves.

Android

Next time we will be giving Android a test, and hopefully that will have drivers that are more up to date than whatever we have seen so far.

FMX4Linux is coming, and we can't wait!

May 3, 2017 1 comment

When Embarcadero announced Linux support for their Tokyo release of Delphi, my soul literally left my body for a moment. Could it really be true? After all these years, had Embarcadero done what many said would be impossible?

I must admit that people saying something is impossible has lost its sting for me. Over the past 3 years I have done one of these impossible things on a weekly basis, yet people are just as shocked every single time. But this time – I was the one in a state of excitement.

As an outspoken (an understatement perhaps) active blogger, my skin has grown thick over the years. I also get to see a lot of cool tech long before it’s commercially available and mainstream – so it takes more for me to be swayed and dazzled. And you grow a healthy instinct for separating bullshit from true technical achievements too.

Delphi Tokyo is probably one of the finest Delphi editions to date

But yes, I admit it – this time Embarcadero surprised me in every positive way imaginable. If you follow my blog you know that I call it as I see it and hold little back. But this was a purely positive experience.

Now I know what you are going to say: everyone knew about this, right? Yeah, me too. But my mind has been elsewhere lately with work and projects, so I didn't catch the release buzz from the closed forums (well, "closed" is a matter of perspective; I have friends from Russia to the United States, and from China to Sudan) and for the first time since the Borland days – Embarcadero got the drop on me.

I’m the kind of guy that runs on passion. Delphi, Smart Pascal and even object pascal as a general language is not just work for me – its something I love to use. I relax and enjoy myself when coding. So you can imagine my reaction when my boss sent me a message with “Download Tokyo and give me a report”. It was close to midnight but I was out of that bed faster than bacon on toast, ran into my home office wearing nothing but boxers and a Commodore t-shirt – and threw myself into Embarcadero Developer Network’s download section.

From hero to zero in 2 seconds

I think it was around 07:00 the next morning, one hour before I was due for work, that the magic phrase "command line and system services [daemons] only" hit home. And yes, I "kinda" knew that before – but maybe, just maybe, Embarcadero had thrown in visual applications in the 11th hour. I even had a friendly bet with Jim McKeeth that an FMX UI solution would appear less than 24 hours after release (more about that later).

Now that is a beautiful thing

Either way, it was quite the anti-climax after all that work in VMWare, installing Delphi from scratch, waiting, hoping and praying. First of all because I remember watching an in-depth technical review about Firemonkey by David Intersimone a few years back; the one where he describes the Firemonkey architecture in detail – especially how the abstraction layer between the visual control framework and the actual OS made it possible for FMX to quickly adapt to new environments. Firemonkey is a complex and highly adaptable framework, but its biggest strength is, paradoxically enough, its simplicity. A simplicity achieved through abstracting the UI from the rendering functionality. At least as much as possible; you still have to deal with OS-level windowing, security and all of that – so it's no walk in the park either.

In the presentation (sadly I don't have a link to this one) David went to great lengths to explain that regardless of operating system, as long as someone implemented a driver class that exposed the set of features FMX needs, Firemonkey would run as long as the compiler supported the instruction set. Visual engines could be DirectX, OpenGL, Cairo or whatever makes sense on that particular platform. As long as the "bridge class" talking with the operating system is there – Firemonkey can run on a toaster if need be.

So Firemonkey has the same abstraction concept that we use in the VJL (Visual JavaScript Component Library) for Smart Pascal (differences notwithstanding). If it's Windows you are running on, DirectX is used; if it's OS X, then Apple's implementation of OpenGL runs the show – and if you are on Linux you can pick between OpenGL and Cairo. I must admit I haven't looked too closely at Cairo, but I know it was designed to make advanced composition and UI rendering more efficient.
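To make the abstraction idea concrete, here is a toy sketch of the pattern in TypeScript. To be clear: these are not FMX's (or the VJL's) actual class names or APIs, just an illustration of how a "bridge class" isolates the visual controls from the rendering engine:

```typescript
// Toy model of the "driver class" idea; the names are illustrative,
// not FMX's or the VJL's real API.
interface RenderBackend {
  fillRect(x: number, y: number, w: number, h: number, color: string): void;
  present(): void;
}

class OpenGLBackend implements RenderBackend {
  fillRect(x: number, y: number, w: number, h: number, color: string): void {
    console.log(`gl: quad ${w}x${h} at ${x},${y} in ${color}`);
  }
  present(): void {
    console.log("gl: swap buffers");
  }
}

class CairoBackend implements RenderBackend {
  fillRect(x: number, y: number, w: number, h: number, color: string): void {
    console.log(`cairo: rect ${w}x${h} at ${x},${y} in ${color}`);
  }
  present(): void {
    console.log("cairo: flush surface");
  }
}

// The framework picks a backend once at startup; the visual
// controls above this line never talk to the OS directly.
const platform: string = "linux"; // hypothetical platform probe
const backend: RenderBackend =
  platform === "linux" ? new CairoBackend() : new OpenGLBackend();

backend.fillRect(10, 10, 100, 40, "#336699");
backend.present();
```

Port the backend and everything above it comes along for free – which is exactly why a driver class was all that stood between Tokyo and visual Linux applications.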

Why it Tokyo didn’t ship with visual application support is beyond me, but considering the timing of what happened next, I have made my own conclusions. It doesnt really matter to be honest – the point is we got it and it rocks!

FMX4Linux to the rescue!

Remember the wager I mentioned with Jim McKeeth? It wasn't a serious wager; I just commented and said "a Linux FMX solution will appear within 24hrs, you can bet on it" and added a smiley. I had no idea who or how, I just knew it was going to turn up.

Because one thing is a sure bet: the Delphi community is a group made up of highly resourceful, inventive and clever people. I was pretty sure it wouldn't take many hours before someone came up with a patch or framework to fill the void. And right I was. Less than 24 hours later it was fact rather than conjecture.

This is just a must have. There is no debate.

So, less than 24 hours after Delphi Tokyo hit the shelves, Eugene Kryukov and Alexey Sharagin presented "FMX for Linux", giving you both the missing rendering back-end that talks to the system – and some kind of "widget mapping" (as far as I can understand; I won't pretend to know how they did it) that renders your UI according to GTK. So it's not just a simple "patch", theme or class that calls a handful of external routines; it's a full visual implementation of FMX for Linux. Impressive? Oh yeah, and then some!

Let us explore!

Over the next few days I will be giving you an in-depth look at how this system works. I'm also going to test drive Html Components and see if we can get that running under Linux as well. Since FMX for Linux is still in development we have to make allowances for that – but being able to target Ubuntu is pretty cool! And Remobjects, glorious Remobjects SDK – if that works out of the box I will dance the jig and upload it to YouTube, I swear to cow!

Linux is about to feel the full onslaught of object pascal

There is little doubt what the next Smart Pascal IDE will be based on, and when you combine FMX for Linux, HTML components, Remobjects SDK and TMS into one – you have serious firepower to play with; enough to give even the most hardened C/C++ QT developer reason to be scared. And that is before the onslaught of Data Abstract and Remobjects' C# native compiler.

Oh man next weekend is going to be the best ever!

Let’s not forget ARM targets

The "PI killer" has arrived!

On a second note, we will also be looking at my latest embedded toys – namely the Asus Tinkerboard. I just got two of them in the mail today. It's going to be exciting to see how it fares against the Raspberry PI, ODroid XU4 and the Intel Atom based UP board.

We will also be testing how these two cards can be clustered together using node.js to work as one – and see how that impacts performance for our node.js based Smart Desktop project. These are exciting times indeed!

To make things even more interesting, I will be pitching the Tinkerboard against the ODroid XU4 (the original version, not the passive-cooled, save-the-environment unicorn they push now) and against both the x86 UP v1 and the Raspberry 3b. Although I think the Raspberry PI is in for the beating of its life when the ODroid and Tinker are overclocked to blood-lust mode!

The Smart desktop has a powerful node.js back-end that packs a punch

So, when LLVM-optimized JavaScript runs MC68040 machine-code at 4 times the speed of a high-end Amiga 4000, I will be content.

So let’s do another “that’s impossible” shall we 🙂

Smart Puppy: Smart Pascal meets Linux!

April 21, 2017 Leave a comment

One of my absolute favorite operating systems in the whole world has to be Puppy Linux. I discovered it just a few days ago, and I have fallen completely in love with this thing. I can vaguely remember giving it a test drive a few years back, but I didn't know much about Linux in general back then, so I didn't understand what it represented.

So if you are looking for a friendly, small, fast and easy to use Linux system – then Puppy is about as friendly as it gets. The Facebook user group with the same name is a warm and friendly place to be. Much like Delphi Developer, the admins take pride in keeping things orderly – and the people who hang out there engage, care and help each other out.

Before you run out and download Puppy, which I hope you do later – please understand that Puppy is very different from Linux in general. You could almost say that it’s a whole alternative to mainstream Linux as we know it.

But, once you know about the differences then you are in for a treat! I will explain them in the article, so please be patient and take the time to digest.

Puppies hate fluff

One of the reasons I never converted wholesale to Linux (and yes, I did try) is that the average Linux distro is unbearably and unnecessarily cryptic. For some reason Linux architects suffer from a terrible affliction, namely a shortage of characters. This sickness means that Linux doesn't have enough characters for everyone, so programmers must use a maximum of five letters when naming their software. If coders ignore this shortage and blatantly name something directly or intuitively – then Richard Stallman and Lady Gaga will order a "drive-by pony-tail cut" on the dude. And a Linux administrator without his pony-tail is finished (the nerd equivalent of flipping burgers at McDonalds).

Puppy Linux does contain its fair share of the classical Linux software (that goes without saying). But the man (or woman) behind this wonderful Linux flavour is also level-headed, clever and resourceful – and has thankfully broken with what can only be described as archaic thinking.

Puppy Linux is not exactly software impaired

So even with my minimal Linux experience I was able to navigate around the filesystem and locate documents (which here are even called "Documents" and "My Documents"). There is a whole bunch of these tiny differences, small things that make all the difference – from the way things are named to where things are stored and placed.

And it’s so small! The basic install is less than 300 megabytes in size (!) Yes you read that right. The generic Puppy Linux installation with desktop and a few popular applications is less than 300 megabytes.

In my case I can have a fully loaded development studio, featuring GCC, FPC (freepascal), the Lazarus IDE, the CodeBlocks IDE, the KDevelop IDE, the Anjuta developer studio – and last but never least Smart Mobile Studio – on a 2 gigabyte USB stick (!) I don't think you can even get USB sticks that small any more (?) The smallest I have is 32 gigabyte and the largest is 256 gigabyte.

But before we go on with the wonders of Puppy Linux – let's look at what Linux did wrong. Why is Linux even to this day considered hard to use? Or to put it another way: what have Windows and OS X done right to be considered easier to use, yet capable of the same (and often more)?

Naming, what Linux did wrong

One of the tenets of professional programming is to ensure that classes, members and functions have meaningful names. There was a time when you would get away with single-character class, variable and method names – but that won't fly in 2017. Your QA department would have you for breakfast if you checked in code like that. Classes, name-spacing and packages should be descriptive. End of story.

The reason this has become an almost sacred law, should be obvious: it may not be you that maintains the software 5, 10 or 15 years down the road. A piece of code should always be written in such a way that it can be understood and thus maintained by others within a reasonable time-frame (which also means plenty of comments and good documentation). This is not a matter of preference, but of time and money. And when you pay out salaries these factors are one and the same.

So naming elements of software in 2017 has a lot of criteria attached to it, the most obvious so far being (see the small example after the list):

  • Always name things clearly, because that
    • ensures ease of use
    • simplifies maintenance
    • removes doubt as to "what is what"
    • causes fewer user mistakes
  • The fewer mistakes, either in understanding something or using something, the less money a business throws out the window. Money that could be spent paying you to make something cool instead (or fix bugs that are critical).
  • The fewer user mistakes caused by customers, the more your service department can focus on quality of service. When a company starts out it usually has outstanding support, but as it grows its service-desk slowly turns into robots.
  • The easier and more intuitive a system is, the more users it will attract. If people can pick something up and just naturally figure out how things work, then statistics show that they will most likely continue using it through thick and thin.
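To make the point concrete, here is a contrived little TypeScript snippet. The names are hypothetical, but the difference in readability is the whole argument:

```typescript
// Contrived example with hypothetical names; both functions
// return the same data.
interface ProcessInfo {
  pid: number;
  name: string;
  state: string;
}

// Cryptic: you must already know what "ps" stands for.
function ps(): ProcessInfo[] {
  return [{ pid: 1, name: "init", state: "running" }];
}

// Descriptive: the name documents itself, five-letter limit be damned.
function listRunningProcesses(): ProcessInfo[] {
  return ps();
}

console.log(listRunningProcesses());
```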

Right. With these rules in mind – what happens if you take them but apply them to Linux instead? Not Linux code or libraries or stuff like that, but Linux the user-experience from top to bottom?

And don’t get me wrong, I think Linux is awesome so this is not an attack on Linux; I’m simply pointing out factors that could help make Linux even better.

I mean, just look at the Linux filesystem. Again you have this absurd shortage of characters. Why would anyone abbreviate the word "user[s]" into "usr"? It makes no sense. Same with "lib" – would it have killed you to call it "libraries"? And so it continues with "dev", because calling it "devices" would cause the space-time continuum to break.

Shell shocked

The shell (or command-line under Windows) and its commands are really the thing that annoys me the most. There is a fine line between use and abuse, and the level of abbreviation here is beyond whimsical and harmless – and well into the realm of silly and absurd.

Who in their right mind would name a command “ps”? What could it possibly mean? The first thing that comes to mind is “print spool”. If you come from any other platform than Linux (and perhaps Unix, I don’t know) you would never imagine that “ps” actually means “list all running processes and their states”.

"ps" lists the running processes and their states

Above: running “ps” from the shell lists the running processes. Would it have killed the coders to just call it, oh perhaps, “listprocesses” or “showrunningprograms”?

The “ps” command is just one in a long, long list of commands that really should be brought into the twenty-first century. The benefits should be obvious. It should not be necessary for a 43-year-old man to blog about this, because it’s been a problem for the better part of three decades.

  • Kids and teenagers are the bread and butter of all operating systems. The faster a kid or teenager can do something with a system, the more loyal that individual will be to the platform in the future.
  • Linux needs developers and users from other platforms. When someone who has been a successful developer for almost 30 years finds a system cryptic and hard to use, how much harder will it be for a non-technical user?
  • Standards are important. The location of files, libraries and settings should be uniform. As of writing, Linux seems to have 3 different standards (again, I am no expert): systemd, initd and "systemx". The latter is just a name I made up, because no-one really knows what it's called. We are now in the realm of PlayStation, ChromeOS, WebOS and systems that build on Linux – but deviate the moment the drivers have loaded.

Again, I’m not writing this in a negative mindset. I have been using Ubuntu for a while as an alternative to Windows and OS X. But this has been a purely user-centric experience. I have not done any programming except random bits of Freepascal and node.js experiements. I have enjoyed Ubuntu purely as a user. Writing documents, checking email, browsing the web, IRC, reading news groups – ordinary stuff.

So I am very positive about Linux, but I have yet to find "my" flavour of it; a Linux distro I feel at home with, one that appeals to my way of working.

Until today that is..

Enter Puppy Linux

Puppy is a flavour of Linux that just demolishes some of Linux's holiest concepts. Everyone will tell you never to run as root: leave the root account in peace, keep it under lock and key just in case someone gets into your second account, right?

Well, not Puppy. Here you are expected to run as root, and you can, if you for some reason must, jump out into a secondary user which is fake. So indeed – Puppy Linux is a single-user Linux system. It's the rebel, the scoundrel and rogue of the Linux world – the distro that couldn't care less what the other guys are doing.

Fancy a spot of coding? GCC is an SFS module away ..

Secondly, and this is very cool, Puppy is highly modular. No, I'm not talking about packages; all Linux distros have that in some form or another. I'm talking about something called SFS files, short for squashed file system.

To make a long story short, Puppy allows you to mount compressed files as disks, and they become a part of the system. It's a bit like the virtual-drive API on Windows (if you have ever coded against that?). You may have noticed how on Windows you can double-click a .ISO file and suddenly the file is mounted in the file-explorer, and stays mounted until you manually dismount the damn thing.

Well, SFS is that but also much more. Because when you mount an SFS file, whatever applications it contains register on the start-menu, add themselves to the global path and essentially become one with the whole system. It took me a while to wrap my head around this (good features always come with a price, so I kept waiting for the catch. But there was none!). The people I talked to about this were not coders, so they had some very colorful explanations for how it all worked. But once I realized an SFS file is just a compressed archive with a fixed structure (including mount and dismount scripts) I got the picture.

Size and speed matters

Before I started using a PC back in the early 90's I was a huge Amiga fan. I still am (as you no doubt have noticed). One of the first differences between Amiga computing and PC computing that hit me was how wasteful PCs were. I remember I was shocked when I saw how much space and CPU power the average programmer just wasted – because on the Amiga everyone strived to be as resourceful and efficient as possible.

We would spend days optimizing even the smallest parts of our applications just to ensure that it ran at top speed and produced as little bloat as possible. This was just baked into us, it was the way of the force and as common as your grandfather’s work ethics. Quality and achievement went hand in hand.

CodeBlocks is an excellent IDE 🙂

When you fire up Puppy Linux you are instantly reminded that there are people to this day who care about size and speed. And that maybe, just maybe, consumerism has tricked you into throwing away perfectly usable technology year after year. Machines that actually had more than enough CPU power for the tasks you wanted, but were slowed down by bloated operating systems, poor programming and lazy code generators.

Puppy Linux is the fastest bloody Linux you will ever run. The only operating system I have tried that runs faster is AROS compiled for ARM (a distro called AEROS, a reverse-engineered edition of Amiga OS). But as far as x86 and the Linux kernel go – Puppy Linux is the bomb.

I know I’m repeating myself here but: less than 300 megabytes for a fully loaded Linux distro with text processor, browser, devkit, music player, video player and all the “typical” applications you would use for daily tasks? And it truly is the fastest hunk of junk in the galaxy without question.

Amiga coders and the cult of joy

When I started to snoop around the Puppy environment and community, I started to notice a couple of tell-tale signs. Tiny, subtle things that only an Amiga coder would pick up on. Enough to give you a hunch, a gut feeling – but not enough to blatantly say it out loud. "Amiga guys did this", I would whisper to myself. And it's not really such a big surprise to find coders, now in their 40s, that used to be Amiga coders.

In 30 years' time there will be company owners and CEOs that grew up with Playstation and have fond memories of that. But they won't recognize each other by their craftsmanship – that is the difference.

The cult of joy lives on, albeit in new forms

The Amiga was special because it was not just a games machine. It was also a complete rewrite of what constituted the power operating system of its time: Unix. In other words, they copied the best stuff from Unix (which by the way had the same absurd filesystem as Linux still has) but cleaned it up. First thing to be cleaned was (drumroll) the filesystem. But that's another story altogether.

When I entered the Puppy Linux forum I naturally mentioned that I was a complete Linux novice, and that my favorite machine before x86 was an Amiga. And what do you think happened? Let's just say that more than a few greeted me with open arms. These were the Amiga users that went to Linux when Commodore went under all those years ago, and they had been active in shaping Linux ever since (!)

So yeah, I had a great time on their forums – it was like running into a long-lost cousin, like when you haven't seen a family member in 30 years and suddenly meet them face to face.

Tired of 30 gigabyte operating systems?

Puppy Linux is not for everyone. It's the kind of system you either love or hate; I have yet to find someone on middle-ground regarding Puppy. Either you use it and are thrilled about it, or you never install it.

It has a lot of good things going for it:

  • It is built to be one of the smallest working desktop environments you can get
  • It is built according to "the old ways", where speed, efficiency and size matter
  • It runs fine on older hardware (my test machine is an 8-year-old laptop) and makes stuff you would otherwise throw away valuable again
  • It is storage abstracted, meaning you can have all your personal stuff inside a single SFS archive (easier to back up), while the operating system remains on a USB stick
  • You don't have to permanently install it (again, boot from a USB stick)
  • It is single-user by default, which is perfect for IOT projects and devices!
  • It supports ARM, so you can now enjoy this awesome thing on the Raspberry PI 3!
  • It's Linux, so it has all the benefits of a rich driver database
  • The latest Puppy is binary compatible with Ubuntu (whatever that means)
  • There are 3 different desktops for it (to my knowledge), so if you don't like the default client just install something else
  • It is the perfect rescue USB stick. At less than 300 MB you can fit it on any old USB stick you have around the house; I think the smallest you can buy now is 4 GB
  • It has a warm, helpful, friendly and international group of users

Oh and it’s free!

As a final note: I installed Wine, the system that makes it possible to run Windows software on Linux (not an emulator; more of an API-call middleware / dispatcher). I was quite surprised to see it run Smart Mobile Studio on the first try!

So fancy a bit of hacking this weekend? Why not give Puppy a go?

Check it out here: http://puppylinux.org/main/Download%20Latest%20Release.htm

Smart Pascal + WebOS = true

April 15, 2017 Leave a comment

LG-WebOS

If you own a television (and who doesn't) chances are you own an LG model. The LG brand has been on the market since before recorded history (or so it feels) and has been a major player for decades.

What is little known, though, is that LG also owns and finances a unique operating system. This is not uncommon these days; most NAS and cloud vendors have their own system (there are roughly 20 cloud based systems on the market that I know of). The only way to make a Smart TV (sigh) is to add a computer to it. And if you buy a television today, chances are it contains a small embedded board running a custom-made operating system designed to do just that.

Every vendor has their own system, and those that don't usually end up forking Android and adapting it to their needs.

Luna WebOS

LG's system is called WebOS. This may sound like yet another HTML eye-candy front end, but hear me out, because this OS is not what it seems.

VirtualBox_Developers LuneOS emulator appliance 20151006131924-stable-038-253_15_04_2017_16_01_43

Except for some font issues (see disk label) Smart loaded fine under WebOS

Remember Palm OS? If you are between 35 and 45 you should remember that before Apple utterly demolished the mobile-phone market with the iPhone, one of the most popular brands was Palm. Their core product was the "Palm Pilot", a kind of digital filofax (personal organizer); later Palm devices rolled the organizer and a mobile phone into one.

WebOS is the descendant of what used to be called Palm OS, but it has been completely revamped, given a sexy new user interface (it looks a lot like Android, to be honest!) and brought into the present age. And best of all: it's 100% free! It runs on a plethora of systems, from x86 to the Raspberry Pi to MIPS. It is used in televisions, terminals and set-top boxes around the world – and is quite popular amongst engineers.

Check out their new portal here; there are also plenty of links to pre-built images if you look around: https://pivotce.com/

JavaScript application stack

One of the coolest features, in my view, is that applications for it are primarily written in JavaScript. The OS itself is native of course, but they have discovered that JavaScript and HTML5 are an excellent way to build applications – applications that are easy to control, sandboxed and safe, yet incredibly powerful and capable.

VirtualBox_Developers LuneOS emulator appliance 20151006131924-stable-038-253_15_04_2017_16_00_46

Smart Desktop booting Quake 3 in Luna OS

Well, today I sent them an email requesting their SDK. I know they use Enyo.js as their primary and suggested framework – but that is no problem for Smart Mobile Studio. A better question is: can Luna handle our codebase?

When I loaded up Quake 3 in pure asm.js, the TV crashed with a spectacular access violation. So they are probably not used to the level of hardcore coding Smart developers represent. But yeah, Quake III is about as advanced as it gets – it pushes any browser to the outer limits of what is possible.

Once I get the SDK, docs and a few examples, I will begin adding support for it as quickly as possible. That means you get a new project type especially for the platform, plus API units (typically stored under the $RTL\Apis folder).

Why is this cool?

Because with support for Luna / WebOS, you as a Delphi or Smart developer can offer your services to companies that use WebOS or Luna (the open source version) in their devices. The list of companies is substantial – and they are well established companies. And as you have no doubt noticed, hardware engineers don't always make the best software engineers. So this opens up plenty of opportunities for a good object pascal / Smart developer.

17948597_10154372538120906_1912877833_o

Luna has an Android-like quality to it – except it's smoother to use

Remember – other developers will be using vanilla JavaScript. You have the onslaught of the VJL and the might of our libraries at your disposal. You can produce in days what others spend weeks achieving.

These are exciting days! I’ll keep you posted on our progress!

Smart Pascal, the next generation

April 15, 2017 1 comment

I want to take the time to talk a bit about the future, because like all production companies we are always working towards lesser and greater goals. If you don't have a goal then you are in trouble; thankfully our goals have been very clear from the beginning, although I must admit that our way there has been… "colorful" at times.

When we started back in 2010 we didn’t really know what would become of our plans. We only knew that this was important; there was a sense of urgency and “we have to build this” in the air; I think everyone involved felt that this was the case, without any rational explanation as to why. Like all products of passion it can consume you in a way – and you work day and night on turning an idea into something real. From the intangible to the tangible.

transitions_callback

It seems like yesterday, but it was 5 years ago!

By the end of 2011 / early 2012, Eric and myself had pretty much proven that this could be done. At the time there were more than enough nay-sayers, and I think both of us got flamed quite often for daring to think differently. People would scoff at me and say I was insane to even contemplate that object pascal could ever be implemented for something as insignificant and mediocre as JavaScript. This was my first meeting with a sub-culture of the Delphi and C++ community, a constellation I have gone head-to-head with on many occasions. But they have never managed to shake my resolve as much as an inch.

When we released version 1.0 in 2012 some ideas about what could be possible started to form. Jørn defined plans for a system we later dubbed “Smart net”. In essence it would be something you logged onto from the IDE – allowing you to store your projects in the cloud, compile in the cloud (connected with Adobe build services) and essentially move parts of your eco-system to the cloud. Keep in mind this was when people still associated cloud with “storage”; they had not yet seen things like Uber or Netflix or played Quake 3 at 160 frames per second, courtesy of asm.js in their browser.

The second part would be a website where you could do the same, including a live editor, access to the compiler and also the ability to buy and sell components, solutions and products. But for that we needed a desktop environment (which is where the Quartex Media Desktop came in later).

cool

The first version of the Media Desktop, small but powerful. Here running in touch-screen mode with a classical mobile-device layout (full-screen forms).

Well, we have hit many bumps along the road since then. But I must be honest and say that some of our detours have also been the most valuable. Had it not been for the often absurd (to the person looking in) research and demo escapades I did, the RTL wouldn't be half as powerful as it is today. It would deliver the basics, perhaps piggyback on Ext.js or some lame, run-of-the-mill framework – and that would be that. Boring, flat and limited.

What we really wanted to deliver was a platform. Not just a website, but a rich environment for creating, delivering and enjoying web and cloud based applications. And without much fanfare – that is ultimately what the Smart Desktop and its sexy node.js back-end are all about.

We have many project types in the pipeline, but the Smart Desktop type is by far the most interesting and powerful. And it's 100% under your control. You can create the desktop itself as a project – and also the applications that should run on that desktop, as separate projects.

This is perfectly suited for NAS design (network attached storage devices can usually be accessed through a web portal on the device), embedded boards – and intranets, for that matter.

You get to enjoy all the perks of a multi-user desktop, one capable of both remote desktop access, telnet access, sharing files and media, playing music and video (we even compiled the mp4 codec from C to JavaScript so you can play mp4 movies without the need for a server backend).

The Smart Desktop

The Smart Desktop project is not just for fun and games. We have big plans for it. And once it's solid and complete (we are closing in on 46% done), my next side project will not be more emulators or demos – it will be to move our compiler(s) to Amazon, and write the IDE itself from scratch in Smart Pascal.

smart desktop

The Smart Desktop – A full desktop in the true sense of the word

And yeah, we have plans for Emscripten as well – which takes C/C++ and compiles it into asm.js. It will take a herculean effort to merge our RTL with their sandboxed infrastructure – but the benefits are too great to ignore.

As a bonus you get to run native 68k applications (read: Amiga applications) via emulation. While I realize this will mostly interest people that grew up with that machine – it is still a testament to the power of Smart, and how much you can do if you really put your mind to it.

Naturally, the native IDE won't vanish. We have a few new directions we are investigating here – but native will absolutely not go anywhere. But the cloud desktop system we are creating is starting to become what we set out to make five years ago (has it been half a decade already? Tempus fugit!). As you all know it was initially designed as an example of how you could write full-screen applications for the Raspberry Pi and similar embedded devices. But now it's a full platform in its own right – with a Linux core and a node.js heart, there really is very little you cannot do here.

scsc

The Smart Pascal compiler is one of our tools that is ready for cloud-i-fication

Being able to log in to the Smart company servers, fire up the IDE and just code – no matter where you are, be it Spain, Italy, Egypt, China or good old USA – is a pretty awesome thing!

Click compile and the server does the grunt work; you can test your apps live in a virtual window, switch between device layouts and targets – then hit "publish" and it goes to Cordova (or Delphi) and voila: you get a message back when binaries for 9 mobile devices are ready for deployment. One click to publish your applications to the App Store, Google Play and the Microsoft marketplace.

Object pascal works

People may have brushed off object pascal (and from experience, those people have a very narrow view of what object pascal is all about), but when they see what Smart delivers – which is itself written in Delphi, powered by Delphi, and should be in every Delphi developer's toolbox – I think it should draw attention to Delphi as a product, object pascal as a language, and Smart as a solution.

With Smart it doesn't matter what computer you use. You can sit at home with the new A1222 PPC Amiga, or a kick-ass Intel i7 beast that chews virtual machines for breakfast. If your computer can handle a modern website, then you can learn object pascal and work directly in the cloud.

desktop_embedded

The Smart Desktop running on cheap embedded hardware. The results are fantastic, and the financial saving of using Smart Pascal on the kiosk client is $400 per unit in this case.

Heck, you can work off a $60 ODroid XU4; it has more than enough horsepower to drive the latest Chrome or Firefox engines. All the compilation takes place on the server anyway. And if you want a Delphi vessel rather than PhoneGap (so that it's a Delphi application that opens a web-view in full-screen and exposes features to your Smart code), then you will be happy to know that this is being investigated.

More targets

There are a lot of systems out there in the world, some of which did not exist just a couple of years ago. FriendOS is a cloud based operating system we really want to support, so we are eager to get cracking on their SDK when it comes out. Being able to target FriendOS from Smart is valuable, because some of the stuff you can do in SMS with just a bit of code would take weeks to hand-write in JavaScript. So you get a productive edge unlike anything else – which is good to have when a new market opens.

As far as Delphi is concerned, there are smaller systems that Embarcadero may not be interested in, for example the many embedded systems that have come out lately. If Embarcadero tried to target them all, it would be a never-ending cat-and-mouse game. It seems like almost every month there is a new board on the market. So I fully understand why Embarcadero sticks to the most established vendors.

ov-4f-img

Smart technology allows you to cover all your bases regardless of device

But for you, the programmer, these smaller boards can represent thousands of dollars worth of savings. Perhaps you are building a kiosk system and need a good-looking user interface that is not carved in stone, touch capabilities, and low-latency full-duplex communication with a server; there is not much you can do about that if Delphi doesn't target the device. And Delphi is a workhorse, so it demands a lot more CPU than a low-budget ARM SoC can deliver. But web-tech can thrive in these low-end environments. So again we see that Smart can complement Delphi and be a valuable addition to it. It helps you as a Delphi developer act on opportunities that would otherwise pass you by.

So in the above scenario you can double down. You can use Smart for the user-interface on a low power, low-cost SoC (system on a chip) kiosk — and Delphi on the server.

It all depends on what you are interfacing with. If you have a full Delphi backend (which I presume you have) then writing the interface server in Delphi obviously makes more sense.

If you don't have any back-end then, depending on your needs or future plans, it could be wise to investigate whether node.js is right for you. If it's not – go with what you know. You can make use of Smart's capabilities either way to deliver cost-effective, good-looking device front-ends or mobile apps. It's a valuable tool in your Delphi toolbox.

Better infrastructure and rooting

So far our support for various systems has been in the form of APIs or "wrapper units". This is good if you are a low-level coder like myself, but if you are coming directly from Delphi without any background in web technology, you wouldn't even know where to start.

So starting with the next IDE update, each platform we support will not just have low-level wrapper units, but project types and units written and adapted by human beings. This means extra work for us – but that is the way it has to be.

As of writing the following projects can be created:

  • HTML5 mobile applications
  • HTML5 mobile console applications
  • Node.js console applications
  • Node.js server applications
  • Node.js service applications (requires PM2)
  • Web worker project (deprecated, web-workers can now be created anywhere)

We also have support for the following operating systems:

  • Chrome OS
  • Mozilla OS
  • Samsung Tizen OS

The following API’s have shipped with Smart since version 1.2:

  • Khronos browser extensions
  • Firefox-specific APIs
  • NodeWebkit
  • Phonegap
    • Phonegap provides native access to roughly 9 operating systems. It is however cumbersome to work with and beta-test if you are unfamiliar with the “tools of the trade” in the JavaScript world.
  • Whatwg
  • WAC Apis

Future goals

The first thing we need to do is update and re-generate ALL header files (the pascal units that interface with the JavaScript libraries) and make what we already have polished, available, documented and ready for enterprise level use.

kiosk-systems

Why pay $400 to power your kiosk when $99 and Smart can do a better job?

Secondly, project types must be established where they make sense. Not all frameworks are suitable for full project isolation; some act more like utility libraries (jQuery and similar training-wheels). And as much of the RTL as possible will be made platform independent and organized following our namespace scheme.

But there are also other operating systems we want to support:

  • Norwegian made Friend OS, which is a business oriented cloud desktop
  • Node.js OS is very exciting
  • LG WebOS, and their Enyo application framework
  • Asustor DLM web operating system is also a highly attractive system to support
  • OpenNAS has a very powerful JavaScript application framework
  • Seagate NAS OS 4 likewise uses JavaScript for visual, universal applications
  • Microsoft Universal Platform allows you to create truly portable, native speed JavaScript applications
  • QNAP QTS web operating system [now at version 4.2]

All of these are separate from our own NAS and embedded device system, Smart Desktop, which uses node.js as a backend and will run on anything as long as node and a modern browser are present.

Final words

I hope you guys have enjoyed my little trip down memory lane, and also the plans we have for the future. Personally I am super excited about moving the IDE to the cloud and making Smart available 24/7 globally – so that everyone can use it to design, implement and build software for the future right now.

Smart Pascal Builder (or whatever nickname we give it) is probably the first of its kind in the world. There are a ton of "write code on the web" pages out there, but so far there is not a single hard-core development studio like the one I have in mind here.

So hold on, because the future is just around the corner 😉

Smart Pascal: Changes

February 6, 2017 3 comments

The changes to Smart Mobile Studio over the past 12 months have been tremendous. At first glance they might not seem huge, but if you do a raw compare of the upcoming RTL and the now more than a year old RTL, I think you will find very few places in the code where I haven't made improvements.

In this post I will try to collect some of the changes. A final change log will be released together with the update, but at least this “preview” will tell you something about the hours, days, weeks and months I have put into the RTL.

The Smart Lab, this is where most of my ideas turn into code 🙂

There will also be changes to the IDE. We have 3 members who have added new features in the past, and members who are adding changes right now. The most immediate is an update from Chromium Embedded (CEF2) to CEF4, which brings a huge improvement in speed and preview quality. And there are other, more important changes being worked on – the most pressing being the designer and "live" rendering of controls.

Ok, let’s just dive into it!

RTL Changes

  • The units that make up the RTL are now organized according to a namespace scheme
    • Visual units are prefixed with SmartCL
    • Universal units are prefixed with System
    • Low level API units for node.js are prefixed with NodeJS
    • High level Node classes are prefixed with SmartNJ
  • Better fragmentation: Code that was previously a part of one very large unit has been divided into smaller files to avoid large binaries. For example:
    •  TRect, TPoint etc. are now in System.Types.Graphics
    • System.Time is extended with SmartCL.Time which gives graphical timing functions like TW3Dispatch.RequestAnimationFrame()
  • The unit System.Objects has been added, isolating classes that work both in browsers, node and headless runtime environments:
    • TW3ErrorObjectOptions
    • TW3ErrorObject
    • TW3OwnedErrorObject
    • TW3HandleBasedObject
  • TW3Label has been completely re-written and is now faster, has synchronized state management, resize on ready-state and much more
  • The unit System.NameValuePairs was added. This implements a traditional name/value list (or dictionary) that is used by various classes and standards throughout the RTL (HTTP headers being an example)
  • A show-stopping bug in SmartCL.Effects has been fixed. Effects now work as expected (simply add SmartCL.Effects to your uses clause)
  • TFileStream has been added to SmartNJ.Streams. This is unique for node.js since only node (and some hybrid runtime engines) support direct file access.
  • TBinaryData is extended in SmartNJ.Streams.pas to emit and consume node.js buffers, which are different from traditional JS buffers. ToNodeBuffer() and FromNodeBuffer() allow you to consume the special node buffer types.
  • A full software tweening engine has been written from scratch in pure Smart Pascal and added to the SmartCL namespace. The following units have been added:
    • SmartCL.Tween
      • TW3TweenElement
      • TW3TweenEngine
      • function TweenEngine: TW3TweenEngine
    • SmartCL.Tween.Effect
      • TW3CustomEffect
      • TW3CustomTweenEffect
      • TW3ControlTweenEffect
      • TW3MoveXEffect
      • TW3MoveYEffect
      • TW3MoveToEffect
      • TW3ColorMorphEffect
      • TW3CustomOpacityEffect
      • TW3FadeInEffect
      • TW3FadeOutEffect
      • TW3OpacityEffect
    • SmartCL.Tween.Ease
      • TW3TweenEase
      • TW3TweenEaseLinear
      • TW3TweenEaseQuadIn
      • TW3TweenEaseQuadOut
      • TW3TweenEaseQuadInOut
      • TW3TweenEaseCubeIn
      • TW3TweenEaseCubeOut
      • TW3TweenEaseCubeInOut
      • TW3TweenEaseQuartIn
      • TW3TweenEaseQuartOut
      • TW3TweenEaseQuartInOut
      • TW3TweenEaseQuintIn
      • TW3TweenEaseQuintOut
      • TW3TweenEaseQuintInOut
      • TW3TweenEaseSineIn
      • TW3TweenEaseSineOut
      • TW3TweenEaseSineInOut
      • TW3TweenEaseExpoIn
      • TW3TweenEaseExpoOut
      • TW3TweenEaseExpoInOut
      • TW3TweenEaseCollection
      • function TweenEaseCollection: TW3TweenEaseCollection
  • Support for require.js (simplified resource management and more) is now a part of the RTL. The unit SmartCL.Require gives you:
    • TRequireError
    • TW3RequireJSConfig
    • TW3RequireJS
    • function Require: TW3RequireJS;
    • procedure Require(Files: TStrArray);
    • procedure Require(Files: TStrArray; const Success: TProcedureRef);
    • procedure Require(Files: TStrArray; const Success: TProcedureRef; const Failure: TW3RequireErrHandler);
  • SmartCL (browser) based communication has been consolidated into more appropriate named units:
    • SmartCL.Net.http
    • SmartCL.Net.http.headers
    • SmartCL.Net.websocket
    • SmartCL.Net.jsonp
    • SmartCL.Net.socketIO
    • SmartCL.Net.rest
  • A callback bug in the REST api has been fixed (called wrong handler on exit)
  • Support for mutation events has been added to the RTL. This can be found in the unit SmartCL.Observer. A mutation observer allows you to listen for changes to any HTML element, its properties or attributes. This is a very important unit with regards to data-aware controls:
    • TMutationObserverOptions
    • TMutationObserver
  • A fallback shim for mutation observing has been added to the $RTL\Shims folder. This ensures that mutation events can be observed even on older browsers.
  • An MSIE legacy shim that patches all missing IE functionality from the current version back to IE5 has been added. This allows Smart applications to execute without problems on older browsers (of which there are surprisingly many)
  • The unit SmartCL.Styles has been added. This contains classes and methods that create a stylesheet at runtime. The stylesheet is deleted when the object instance is disposed. This allows for easier styling from within your code rather than having to pre-define it in the global css file.
  • Polygon helper classes have been isolated in SmartCL.Polygons. These expose functionality to TPointArray and TPointFArray:
    • TPolygonHelper
    • TPolygonFHelper
  • Support for socket.io has been added for the browser. The unit SmartCL.Net.SocketIO contains the following client class and methods:
    • TW3SocketIOClient
      • procedure Connect(RemoteHost: string)
      • procedure Disconnect
      • procedure Emit(EventName: string; Data: variant)
      • procedure On(EventName: string; Handler: TW3SocketIOHandler)
  • Support for socket.io has been added to the new SmartNJ namespace as well, allowing you to write fast, high-performance SocketIO servers.
  • Support for WebSocketIO, an amalgamation of classic WebSocket and SocketIO has been added. This is the default socketio server type for our node.js system
  • Support for user-attributes has been added and isolated in SmartCL.Attributes. This allows you to easily add, read, write and remove custom attributes to any tag. This is a very important feature that is used both by the effects framework – and also by the database framework.
  • The SmartCL.Buffers unit has been updated. It now uses the latest methods of the RTL and works on all browsers, not just WebKit and Firefox.
  • $RTL\SmartCL\Controllers\SmartCL.Scroll.Momentum has been removed
  • $RTL\SmartCL\Controllers\SmartCL.Scroll.plain has been removed
  • All visual controls have been updated, rewritten and thoroughly tested
  • The RTL now has support for ACE, the #1 JavaScript code editor. This is more or less a JavaScript implementation of Delphi's SynEdit, but with functionality closer to Sublime. Ace comprises over 100 files. In this first release we have focused on getting a respectable basis in place.
    • The wrapper unit is SmartCL.AceEditor, which gives you the following classes:
      • TW3AceMode
      • TW3AceTheme
      • TW3AceEditor
        • Text: string
        • SelectedText: string
        • LineCount: integer
        • WordWrap: boolean {get; set;}
        • ShowPrintMargin: boolean {get; set;}
        • EditorMode: TW3AceMode {get; set;}
        • Theme: TW3AceTheme {get; set;}
    • Modes and syntax we added support for:
      • AceEdit.Mode.Abap
      • AceEdit.Mode.Abc
      • AceEdit.Mode.ActionScript
      • AceEdit.Mode.Ada
      • AceEdit.Mode.Apache
      • AceEdit.Mode.AppleScript
      • AceEdit.Mode.Ascii
      • AceEdit.Mode.AutoHotKey
      • AceEdit.Mode.Base
      • AceEdit.Mode.BatchFile
      • AceEdit.Mode.C9Search
      • AceEdit.Mode.Cirru
      • AceEdit.Mode.Clojure
      • AceEdit.Mode.Cobol
      • AceEdit.Mode.Coffee
      • AceEdit.Mode.ColdFusion
      • AceEdit.Mode.Cpp
      • AceEdit.Mode.CSharp
      • AceEdit.Mode.CSS
      • AceEdit.Mode.Curly
      • AceEdit.Mode.D
      • AceEdit.Mode.Dart
      • AceEdit.Mode.Diff
      • AceEdit.Mode.django
      • AceEdit.Mode.Pascal
      • AceEdit.Mode.VbScript
      • AceEdit.Mode.X86Asm
    • Themes we support:
      • AceEdit.Theme.Ambiance
      • AceEdit.Theme.Base
      • AceEdit.Theme.Chaos
      • AceEdit.Theme.Chrome
      • AceEdit.Theme.CloudMidnight
      • AceEdit.Theme.Clouds
      • AceEdit.Theme.Cobalt
      • AceEdit.Theme.CrimsonEdit
      • AceEdit.Theme.Dawn
      • AceEdit.Theme.Dreamweaver
      • AceEdit.Theme.eclipse
      • AceEdit.Theme.Github
      • AceEdit.Theme.Monokai
      • AceEdit.Theme.Twilight
  • The RTL has a completely re-written ready-state engine. Since JavaScript is async by nature, the handle or reference to the TAG a class manages may not be ready during the constructor. The ready-state engine waits in the background for the handle to become valid and the control to become available in the DOM (document object model). It then sets the appropriate component-state flags and issues a resize to ensure that the control looks as it should.
  • Support for Control-State (which is common in Delphi and Lazarus) has been added to visual controls. The following state flags can be read or set:
    • csCreating
    • csLoading
    • csReady
    • csSized
    • csMoved
    • csDestroying
  • TW3CustomControl now supports creation-flags. This is also a feature common to Delphi's VCL and Lazarus's LCL. It allows you to set flags that disable fundamental behavior when you don't need it. For instance, a label which will always have a fixed size or be in a fixed place does not need Resize() management. Turning off resize-checking makes your application execute faster (see the sketch after this list). The following creation flags can be defined:
    • cfIgnoreReadyState
    • cfSupportAdjustment
    • cfReportChildAddition
    • cfReportChildRemoval
    • cfReportMovement
    • cfReportResize
    • cfAllowSelection
    • cfKeyCapture
  • To enable or disable component states, use the following methods of TW3CustomControl:
    • procedure AddToComponentState(const Flags: TComponentState)
    • procedure RemoveFromComponentState(Const Flags: TComponentState)
  • Support for keyboard input and charcode capture has been added to the RTL. Simply add the flag cfKeyCapture to the CreationFlags() function of your custom-control and the control’s OnKeyPress event will fire when the control has focus and a key is pressed. By default this is turned off.
  • Support for text-selection, and for disabling text-selection, has been added. Simply add or remove cfAllowSelection from your control's CreationFlags() function. By default text selection is turned off (except for obvious controls like text-edit).
  • TW3CustomControl now has a zIndex property. This represents the z-order of a control (which control is in front of others).
  • The function TW3CustomControl.GetZOrderList(sort: boolean) can be called to get a sorted or un-sorted list of child elements. This provides both the handle for each control and its zindex value.
  • The method TW3CustomControl.Showing() has been updated; it now takes into account IFrames and CSS3 GPU positioning (read: it will understand that a control is offscreen even if you used CSS3 to move it).
  • TW3CustomControl now has a Cursor property. This allows you to read and set the mouse cursor for a control. The following cursors are supported:
    • crAuto
    • crDefault
    • crInherited
    • crURL
    • crCrossHair
    • crHelp
    • crMove
    • crPointer
    • crProgress
    • crText
    • crWait
    • crNResize
    • crSResize
    • crEResize
    • crWResize
    • crNEResize
    • crNWResize
    • crNSResize
    • crSEResize
    • crSWResize
    • crEWResize
  • To simplify mouse cursor handling, the static class TW3MouseCursor has been added to SmartCL.System.pas. It exposes the following methods:
    • function  CursorByName(const CursorName: string): TCursor
    • function  NameByCursor(const Cursor: TCursor): String
    • function  GetCursorFromElement(const Handle: TControlHandle): TCursor
    • procedure SetCursorForElement (const Handle: TControlHandle; const Cursor: TCursor)
  • TW3CustomControl has two new methods to deal with cursor changes. These are virtual and can be overridden:
    •  function GetMouseCursor: TCursor;
    • procedure SetMouseCursor(const NewCursor: TCursor);
  • Basic device capability examination support has been added via the class TW3DOMDeviceCapabilities in SmartCL.System. This exposes the following methods and properties:
    • DevicePixelRatio: float
    • DisplayPixelsPerInch: TPixelsPerInch
    • GetMouseSupport: boolean
    • GetTouchSupport: boolean
    • GetGamePadSupport: boolean
    • GetKeyboardSupported: boolean
    • GetDevicePixelRatio: float
    • GetDisplayPixelsPerInch: TPixelsPerInch
  • In order to make the VJL more architecturally compatible with Delphi's VCL and Freepascal's LCL, the TW3Component name has been pushed back. TW3TagObj used to be the root class for visual components, followed by TW3Component, TW3MovableControl and finally TW3CustomControl. However, since we want to use TW3Component as a common non-visual base class, the name was pushed back. So TW3TagObj now inherits from TW3Component, and TW3TagContainer has taken the place TW3Component once had. Here is the new inheritance chain:
    • TW3CustomComponent
      • TW3Component
        • TW3TagObj
          • TW3TagContainer
            • TW3MovableControl
              • TW3GraphicControl
              • TW3CustomControl
  • TW3Component is now the basis for non-visual components, which opens up for a whole new set of controls that can be dragged onto a form or datamodule. Sadly, datamodules did not make it into this update, but they are the next logical step – hopefully we will have them in place with the IDE update that follows this one.
    • Controllers will eventually be re-incarnated as TW3Component’s
    • Rogue classes will be implemented as TW3Components
    • Database classes will become non-visual TW3Components
    • Communication classes will become non-visual TW3Components
  • TW3Timer is moved to the unit System.Time.pas
  • TW3Timer now inherits from TW3Component
  • The following procedures have been deprecated and moved to System.Time.pas. Please use the corresponding methods in TW3Dispatch (Note: The w3_* methods will be deleted in the next update!):
    • w3_Callback = TW3Dispatch.Execute()
    • w3_SetInterval = TW3Dispatch.SetInterval()
    • w3_ClearInterval = TW3Dispatch.ClearInterval()
    • w3_SetTimeout = TW3Dispatch.SetTimeOut()
    • w3_ClearTimeout = TW3Dispatch.ClearTimeOut()
  • TW3Dispatch, which is a class dealing with timing and scheduling, now supports the following new methods:
    • class function JsNow: JDate;
    • class function Ticks: integer;
    •  class function TicksOf(const Present: TDateTime): integer;
    • class function TicksBetween(const Past, Future: TDateTime): integer;
    • class procedure RepeatExecute(const Entrypoint: TProcedureRef; const RepeatCount: integer; const IntervalInMs: integer);
  • SQLite has been recompiled from C to JS and is now version 3.8.7.4
  • The SQLite units have been moved from SmartCL.SQLite to System.SQLite
  • The following SQLite classes have been given an overhaul:
    • TSQLiteDatabase
    • TSQLiteDBObject
    • TSQLiteRowValues
    • TSQLiteResult
    • TSQLParamData
    • TSQLitePair
    • TSQLiteParams
    • TSQLiteStatement
  • The SQLiteInitialize() global method is no longer required and has been removed. When you include System.SQLite.pas in your project, the whole database engine is automatically linked and loaded into memory on execute. Use the SQLiteReady() global function to check whether the database engine has loaded before use.
  • TReader and TWriter, which were previously straight re-implementations of the LCL and Delphi classes, have been fragmented. TReader and TWriter are now base classes that take an interface as parameter in their constructor (IBinaryTransport). Both TStream and TBinaryData – the easiest ways of dealing with binary files and memory – implement the IBinaryTransport interface.
  • TStreamReader and TStreamWriter have been added to the RTL; these are implemented in System.Stream.Reader.pas and System.Stream.Writer.pas. When working with streams, make sure you use TStreamReader and TStreamWriter. Using TReader and TWriter directly may cause problems if you are not intimately familiar with the RTL.
  • The unit System.Structure has been added to the RTL. This is to simplify working with structured records, abstracting you from the underlying format. The following classes are exposed in System.Structure:
    • EW3Structure [exception]
    • TW3Structure
      • procedure WriteString(Name: string; Value: string; const Encode: boolean);
      • procedure WriteInt(const Name: string; value: integer);
      • procedure WriteBool(const Name: string; value: boolean);
      • procedure WriteFloat(const Name: string; value: float);
      • procedure WriteDateTime(const Name: string; value: TDateTime);
      • function ReadString(const Name: string): string;
      • function ReadInt(const Name: string): integer;
      • function ReadBool(const Name: string): boolean;
      • function ReadFloat(const Name: string): float;
      • function ReadDateTime(const Name: string): TDateTime;
      • function Read(const Name: string): variant;
      • procedure Write(const Name: string; const Value: variant);
      • procedure Clear;
      • procedure SaveToStream(const Stream: TStream);
      • procedure LoadFromStream(const Stream: TStream);
      • procedure LoadFromFile(Url: string; const callback: TW3StructureLoadedCallback);
  • The unit System.Structure.JSON implements a TW3Structure class that stores data as JSON. The data is stored in memory as a single record, but naturally you can add further records as child properties – and then export the complete data in binary format to a stream, or as a single string through the ToString() method.
  • The unit System.Structure.XML implements a TW3Structure class which emits XML. Since the XML interface in browsers is both unstable and wildly different between vendors, we use our own BTree engine to manage the data. The class emits valid XML and will also deal with sub-records, just like TW3JSonStructure.
  • The unit System.JSON has been added, which contains the following classes and methods. TJSON is a highly effective wrapper over the built-in JSON API, making it easier to work with JSON in general. The constructor is highly overloaded so you can wrap existing JS elements directly. This class is used by TW3JSONStructure to simplify its work:
    • TJSONObjectOptions
    • EJSONObject [exception]
    • TJSONObject
    • TJSON
  • The Delphi parser library TextCraft has been added to the RTL. "Added" is not entirely accurate: TextCraft was first written in Smart and then converted to Delphi. But the RTL now has the latest update of this library, making advanced, recursive text-parsing possible:
    • System.Text.Parser.pas
    • System.Text.Parser.Words.pas
  • TW3Borders has been re-coded and optimized.
    • TW3Border.EdgeString() now uses a lookup table for maximum efficiency
    • TW3Border's edges now use a lookup table to quickly map types to strings
    • All superfluous reference testing has been removed, resulting in much faster and more efficient code
    • Styles are no longer read via the older w3_read/write mechanisms, but rather directly from the control handle
  • Two new classes have been added to deal with border-radius.
    • All controls can set the radius for all edges using the older TW3MovableControl.BorderRadius property, but this has now been expanded on. The following classes have been added; TW3BorderRadius can be accessed via the new TW3CustomControl.EdgeRadius property, which creates an instance on demand (first access):
      • TW3BorderEdgeRadius
      • TW3BorderEdgeTopRadius
      • TW3BorderEdgeBottomRadius
      • TW3BorderRadius
  • TStringBuilder has been added to System.Type. It deviates somewhat from the Lazarus and Delphi variations because it adds functions that actually make sense! 🙂
  • All custom controls have a cool background class that allows you to set the color, add images and export pixmaps. A new class called TW3ControlBackgroundSize has been added to this (TW3CustomControl.Background.Size), making things even easier – especially detecting whether a background is a graphic, and determining its true size (!)
    • function  BackgroundIsImage: boolean;
    • property Mode: TW3ControlBackgroundSizeMode
    • property Width: integer;
    • property Height: integer;
  • TW3Constraints (inherits from TW3OwnedObject) has been added to the RTL. This is a class that deals with size constraints for controls. Please note that this is experimental (!). When active it forces size restrictions much like the VCL and LCL – but it can cause havoc in a design that auto-scales (!). It implements the following:
    • property Enabled: boolean
    • property MinWidth: integer
    • property MinHeight: integer
    • property MaxWidth: integer
    • property MaxHeight: integer
    • procedure   ApplyToOwner
    • function GetMaxWidth: integer;
    • function GetMaxHeight: integer;
    • procedure SetMaxWidth(const NewMaxWidth: integer);
    • procedure SetMaxHeight(const NewMaxHeight: integer);
    • function GetMinWidth: integer;
    • function GetMinHeight: integer;
    • procedure SetMinWidth(aValue: integer);
    • procedure SetMinHeight(aValue: integer);
    • procedure   SetEnabled(const NewValue: boolean); virtual;
  • Event classes have finally been added! Need another OnClick event? Just create an event object and attach it to your control. The following event classes are now in SmartCL.Events:
    • TW3DOMEvent
    • TW3StandardDOMEvent
    • TW3MouseEnterEvent
    • TW3MouseLeaveEvent
    • TW3MouseDownEvent
    • TW3MouseMoveEvent
    • TW3MouseUpEvent
    • TW3ElementRemovedEvent
    • TW3ElementAddedEvent
    • TW3ElementContextMenuEvent
    • TW3ElementClickEvent
    • TW3ElementDblClickEvent
    • TW3ElementMouseOverEvent
    • TW3ElementKeyDownEvent
    • TW3ElementKeyPressEvent
    • TW3ElementKeyUpEvent
    • TW3ElementChangeEvent
    • TW3ElementMouseWheelEvent
    • TW3DOMEventAPI
      • class procedure RegisterEvent(Handle: TControlHandle; EventName: string; EventHandler:TW3JSEventHandler; Mode: TW3DOMEventMode); static;
      • class procedure UnRegisterEvent(Handle: TControlHandle; EventName: string; EventHandler:TW3JSEventHandler; Mode: TW3DOMEventMode); static;
  • Since events are not DOM or browser bound, they also occur in node.js and other JavaScript virtual machines (and you can register your own events, just like you do in Delphi or C#). As such, I added System.Events.pas, which contains the basic functionality for OOP events. The following classes and methods were added:
    • TW3SystemEventObject
      • procedure Attach(NameOfEvent: string);
      • procedure Detach;
      • property  Attached: boolean
      • property  EventName: string
  • TW3CustomBrowserAPI.Styles references “window.styles” which is no longer supported by browsers. This has been updated to “window.document.head.style”
  • BrowserAPI() now exposes key browser objects as actual class objects. The following objects are exposed as handles only (as they have been for some time):
    • Document
    • Body
    • Window
    • Styles
    • Console
    • Navigator
    • Self
    • Event
  • Besides the above handles, we now expose direct access to the mapped class objects. The W3C units have been updated to reflect modern browsers (so the objects have the latest methods exposed):
    • DocumentObject: JDocument
    • BodyObject: JHTMLElement
    • WindowObject: JWindow
    • StylesObject: JCSSStyleDeclaration
    • NavigatorObject: JNavigator
    • EventObject: JEvent
  • Added the missing W3C.localStorage unit file, which exposes the localStorage API. This can now be accessed via BrowserAPI().WindowObject.localStorage. It supports the following functions:
    • property  key[const keyId: string]: variant
    • property  length: integer
    • procedure setItem(const key: string; const Value: variant)
    • procedure removeItem(const key: string)
    • procedure clear
    • function  valueOf: variant [* prototype]
    • function  hasOwnProperty(const key: string): boolean [* prototype]
  • TW3Image now has a fitstyle property which wraps the "object-fit" CSS style. This gives you finer control over how the image is presented inside the control (or the background of a control). The following presentation styles are supported:
    •  fsNone
      Image will ignore the height and width of the parent and retain its original size.
    • fsFill
      This is the default value which stretches the image to fit the content box, regardless of its aspect-ratio.
    • fsContain
      Increases or decreases the size of the image to fill the box whilst preserving its aspect-ratio.
    • fsCover
      The image will fill the height and width of its box, once again maintaining its aspect ratio but often cropping the image in the process.
    • fsScaleDown
      The control will compare the difference between fsNone and fsContain in order to find the smallest concrete object size.
  • TW3Image now supports binary IO besides the common HTML src property. The following methods have been added to TW3Image:
    • function  ToBuffer: TBinaryData;
    • function  ToStream: TStream;
    • function  ToDataUrl: string;
    • function  ToImageData: TW3ImageData;
    • procedure LoadFromURL(aURL: String);
    • procedure LoadFromImageData(Const Data:TW3ImageData);
    • procedure LoadFromBinaryData(const Memory:TBinaryData);
    • procedure LoadFromStream(const Stream:TStream);
    • procedure Allocate(aWidth,aHeight:Integer);
  • TW3Image now exposes 3 new properties that are very helpful. PixelWidth and PixelHeight give you the actual size of an image, regardless of scale or viewport proportions. This is handy when using the Allocate() method to create a pixel-buffer that matches an existing image:
    • property  PixelWidth: integer
    • property  PixelHeight: integer
    • property  Complete: boolean
  • The following procedures and functions have been removed. It is faster to assign and deal with events directly on the handle – no need for the extra call:
    • procedure w3_bind
    • procedure w3_bind2
    • procedure w3_unbind
    • procedure w3_unbind2
  • TStreamHelper has been added to SmartCL.System. It adds data objectification, generating an internal URL to access the data via a link. These functions wrap the createObjectURL API (https://developer.mozilla.org/en-US/docs/Web/API/URL/createObjectURL). The following methods are in the helper:
    • function  GetObjectURL: string;
    • procedure RevokeObjectURL(const ObjectUrl: string);
  • TAllocationHelper has been added to SmartCL.System. It does exactly the same as TStreamHelper but for the memory class TAllocation (system.memory.allocation.pas).
  • TBinaryData which is a very flexible and fast memory buffer class (system.memory.buffer.pas), inherits from TAllocation – as such TAllocationHelper also affects this class.
  • The static class TW3URLObject has been added to SmartCL.System.pas. It wraps the same API as TStreamHelper and TAllocationHelper, but operates independently of the medium. This class also adds forced download methods – you can start a "save as" of binary data (or any data) with a single call. The class interface contains the following:
    • class function  GetObjectURL(const Text, Encoding, ContentType, Charset: string): string;
    • class function  GetObjectURL(const Text: string): string;
    • class function  GetObjectURL(const Stream: TStream): string;
    • class function  GetObjectURL(const Data: TAllocation): string;
    • class procedure RevokeObjectURL(const ObjectUrl: string);
    • class procedure Download(const ObjectURL: string; Filename: string);
    • class procedure Download(const ObjectURL: string; Filename: string;
      const OnStarted: TProcedureRefId);
  • TW3CustomControl now has automatic ZIndex assignment. On creation, a ZIndex value is automatically assigned to each control (an auto-incrementing number). The following methods have been added for that value:
    • function  GetZIndex: integer;
    • procedure SetZIndex(const NewZIndex: integer);
  • To better deal with z-order and the creation sequence of child elements, the following methods have been added to TW3CustomControl. This makes it easier to control how child elements stack and overlap:
    • function  GetZOrderList(const Sort: boolean): TW3StackingOrderList;
    • function  GetChildrenSortedByYPos: TW3TagContainerArray;
    • function  GetChildrenSortedByXpos: TW3TagContainerArray;
  • TW3CustomControl's Invalidate() method has previously done nothing. It has been present for TW3GraphicControl (which inherits from TW3CustomControl) as a means of redrawing or refreshing graphics.
    Invalidate() for non-graphic controls now calls Resize() to update the layout. While coders should use BeginUpdate / EndUpdate, there are times when a call to Invalidate() is all you need.
  • Text-shadow, which is a very effective CSS effect for text, has been isolated in its own class (TW3TextShadow) and is now exposed as TW3CustomControl’s TextShadow property. The class is created on-demand and will not consume memory until used.
  • TW3TagStyle, a class that parses and allows you to add, check and remove CSS style-rules on your control, has been updated. The previous parsing routine has been replaced by a much faster string.split() call, removing the need for the for/next loop previously used to extract styles. This class is exposed as TW3CustomControl's "TagStyle" property. Like Text-Shadow, it is created on demand.
    Note: This replaces the older "CssClasses" property, which has now been deprecated.
  • The TW3AnimationFrame class has been moved from SmartCL.Components.pas to SmartCL.Animation.pas which seems more fitting.
  • TControlHandleHelper is a strict helper-class for TControlHandle, which is the handle type used for HTML elements in the DOM. Many of the rogue utility functions that were scattered around the RTL have now been isolated here. It now gives you the following functionality:
    • function Defined: boolean;
    • function UnDefined: boolean;
    • function Valid: boolean;
    • function Equals(const Source: TControlHandle): boolean;
    • function Ready: Boolean;
    • function GetChildById(TagId: string): TControlHandle;
    • function GetChildCount: integer;
    • function GetChildByIndex(const Index: integer): TControlHandle;
    • function GetChildOf(TagType: string; var Items: TControlHandleArray): boolean;
    • function GetChildren: TControlHandleArray;
    • function HandleParent: TControlHandle;
    • function HandleRoot: TControlHandle;
    • procedure ReadyExecute(OnReady: TProcedureRef);
    • procedure ReadyExecuteEx(const Tag: TObject; OnReady: TProcedureRefEx);
    • procedure ReadyExecuteAnimFrame(OnReady: TProcedureRef);
  • w3_GetCursorCSSName and w3_GetCursorFromCSSName have been removed from SmartCL.System.pas – use the methods in the static class TW3MouseCursor instead
  • w3_DOMReady, w3_CSSPrefix and w3_CSSPrefixDef have been removed from SmartCL.System.pas – use the methods in BrowserAPI instead
  • The RTL now has its own stylesheet classes. The units SmartCL.CSS.StyleSheet.pas and SmartCL.CSS.Classes represent the CSS management units for the system.

    TSuperstyle (see below) generates animation styles on-the-fly and returns the name for it. You can add this name to a control with Control.TagStyle.add().

    TCSS is a helper class for building complex styles via code. TSuperstyle uses this to create the animation effects. Note that these two classes are meant as examples (!) SmartCL.CSS.StyleSheet contains the following classes:

    • TW3StyleSheet
      • property    Handle: TControlHandle
      • property    StyleSheet: JCSSStyleSheet
      • property    StyleRules: JCSSRuleList
      • property    Count: integer
      • property    Items[const Index: integer]: string
      • class function  CreateStyleId: string;
      • class function  CreateStylesheetId: string;
      • class function  CreateStyleElement: TControlHandle;
      • class procedure DisposeStyleElement(const Handle: TControlHandle);
      • class function DefaultStylesheet: TW3StyleSheet;
      • function GetSheetObject: THandle;
      • function GetRuleObject: THandle;
      • function GetCount: integer;
      • function GetItem(const Index: integer): string;
    • TSuperStyle
      • class function EdgeRound(Size: integer): string; overload;
      • class function EdgeRound(TopLeftP, TopRightP, BottomLeftP, BottomRightP: integer):String; overload;
      • class function EdgeTopaz: string;
      • class function EdgeAngaro: string;
      • class function AnimGlow(GlowFrom, GlowTo: TColor): string;
      • class procedure AnimStart(Handle: TControlHandle; animName: String);
    • TCSS
      • function KeyFrames: TCSS;
      • function From: TCSS;
      • function &To: TCSS;
      • function Enter: TCSS;
      • function Leave: TCSS;
      • function PercentOf(Value: integer): TCSS;
      • function IntOf(Value: integer): TCSS;
      • function ColorOf(Value: TColor): TCSS;
      • function &inc(Value: string): TCSS;
      • function CRLF: TCSS;
      • function OpenParam: TCSS;
      • function CloseParam: TCSS;
      • function Background: TCSS;
      • function BoxShadow(Left, Top, Right, Bottom: integer): TCSS;overload;
      • function BoxShadow(Left, Top, Right, Bottom: integer; Color:TColor):TCSS;overload;
      • function LinearGradientV(aFrom, aTo: TColor): TCSS;
      • function LinearGradientH(aFrom, aTo: TColor): TCSS;
      • function LinearGradientTL(aFrom, aTo: TColor): TCSS;
      • function LinearGradientTR(aFrom, aTo: TColor): TCSS;
      • function BeginComplexGradient(Angle: integer): TCSS;
      • function EndComplexGradient: TCSS;
      • function ColorPercent(PercentOf: integer; Color: TColor): TCSS;
      • function LinearGradientAngle(aFrom, aTo: TColor; const Angle: float): TCSS;overload;
      • function LinearGradientAngle(Colors: Array of TColor; const Angle: float): TCSS; overload;
      • function AnimationName(Name: string): TCSS;
      • function AnimationDuration(Secs, MSecs: integer): TCSS;
      • function AnimationInfinite: TCSS;
      • class function Make: TCSS;
      • function Assign(Value: string): TCSS;
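As promised in the creation-flags item above, here is a minimal sketch of how that machinery and the component-state flags might be used from a custom control. The exact declaration of CreationFlags() and the TW3CreationFlags set name are my assumptions; consult the shipped TW3CustomControl source for the authoritative signatures.

    type
      // A fixed-size label that wants key capture but no resize management
      TMyStaticLabel = class(TW3CustomControl)
      protected
        function CreationFlags: TW3CreationFlags; override; // signature assumed
        procedure InitializeObject; override;
      end;

    function TMyStaticLabel.CreationFlags: TW3CreationFlags;
    begin
      result := inherited CreationFlags;
      include(result, cfKeyCapture);   // fire OnKeyPress while the control has focus
      exclude(result, cfReportResize); // fixed size and place, skip Resize() management
    end;

    procedure TMyStaticLabel.InitializeObject;
    begin
      inherited;
      // Component-state flags can also be adjusted at runtime
      AddToComponentState([csSized]);
    end;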

Controllers

A new concept has been added to the RTL: controllers. These are classes that attach to a control and either alter its behavior or expose functionality that is not available by default.

SmartCL.Controller.EdgeSense

This unit exposes the class TW3EdgeSenseController, which can be attached to any visual control. Once attached, the controller will fire events whenever the mouse-pointer or a touch is close to the border of the control. This is typically used by controls that are expected to be resized, like a grid header column.
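
To illustrate the pattern, here is a hedged sketch of attaching an edge-sense controller to such a column. The Attach() call and the OnEdgeSense event name are illustrative assumptions on my part – check SmartCL.Controller.EdgeSense for the actual members:

    procedure TMyHeaderColumn.InitializeObject;
    begin
      inherited;
      // FEdgeSense is a field of type TW3EdgeSenseController on the control
      FEdgeSense := TW3EdgeSenseController.Create;
      FEdgeSense.Attach(self); // assumed attach method
      FEdgeSense.OnEdgeSense := procedure (Sender: TObject)
        begin
          // Hint to the user that the column border can be grabbed and resized
          Cursor := crEResize;
        end;
    end;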

SmartCL.Controller.Swipe

This unit provides the following classes:

  • TW3SwipeControllerOptions
  • TW3SwipeRange
  • TW3SwipeController

The TW3SwipeController attaches itself to any visual control and will trigger an event if a swipe gesture is performed on the target control. Since gestures such as swipe are among the most common, it made sense to isolate this one in particular.

This controller is presently used by one of the new components, TW3Win10Header, which is an HTML5 implementation of the Windows 10 mobile header. This is a good-looking control that allows you to swipe through categories, much like a page-control for Windows applications, but better suited for mobile designs.

SmartCL.Controller.TagAttributes

This controller implements access to custom data attributes for any visual control. It is essentially the functionality of TW3ElementAttributes (unit: SmartCL.Attributes), but implemented as a controller.

The class used by the RTL for this behavior today is TW3ElementAttributes, but it will be deprecated in the future in favour of the controller.

SmartCL.Scroll.IScroll

iScroll is now the default scrolling mechanism for visual Smart Mobile Studio applications. The new TW3ScrollWindow and TW3CustomScrollList visual classes all derive their fantastically accurate scrolling and movement capabilities from iScroll – which many regard as the de facto JavaScript scrolling library.

To make iScroll easier to use when writing custom controls that don't inherit from TW3ScrollWindow or TW3CustomScrollList, you can use the controller instead. Simply create an instance, attach it to the parent container (which contains the elements you want to scroll) and you pretty much have instant iOS / Android scrolling.
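
In practice that might look like the sketch below. The controller class name and its Attach() method are assumptions for the sake of illustration; see SmartCL.Scroll.IScroll for the shipped declarations:

    procedure TMyScrollingList.InitializeObject;
    begin
      inherited;
      // FScroller is a field holding the iScroll controller (name assumed).
      // Attach it to the container whose child elements should scroll
      // with momentum, bounce and snap-to behavior.
      FScroller := TW3IScrollController.Create;
      FScroller.Attach(self);
    end;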

Note: $Libraries\IScroll\iScroll.js has been updated to iScroll 5.2.0, which works fine on older Android devices (there were some problems with iScroll before). I have also updated our class implementation to include ALL options, events and functionality.

New Visual Controls

Indeed. While more and more controls are going to be added once the IDE update is in place, we at least have a few new controls in place that can liven up your applications.

First of all, iScroll is now (as explained) the default scroll mechanism in Smart Mobile Studio. When you combine that with the already impressive list of effects (SmartCL.Effects.pas), writing controls that give visual feedback when you touch them – controls that scroll smoothly with momentum, bounce and snap-to – is now almost ridiculously simple.

TW3DBGrid

First up is the long-awaited DB grid. This is a fast-paced grid that can show thousands of elements without any significant speed penalty. This is because it creates the elements as they come into view – and it uses a very effective cache system to keep track of which elements (rows / cols) have been created, and which need attention.

grid

It also allows you to resize and move columns as you would expect from a native grid, so hopefully this first incarnation is just what you expected.

TW3CategoryHeader

Next we have the fancy Windows 10 mobile category header control. This is a touch-driven, swipe-responsive header that shows both a category and a small ingress text.

categories

This is an excellent control to create with Application.Display as its parent. That way it remains on top regardless of the active form – and the user can quickly navigate through swipes.
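
A hedged sketch of that setup (the SetBounds call and the Display.Width property are my assumptions about the surrounding API):

    // Parent the header to the display rather than a form, so it stays
    // on top while the user navigates between forms
    FHeader := TW3CategoryHeader.Create(Application.Display);
    FHeader.SetBounds(0, 0, Application.Display.Width, 48);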

TW3SimpleLabel

We have also added a more lightweight label control. This is best suited for static text that doesn't need horizontal alignment. It is a lot more resource-friendly than TW3Label, which is a composite custom control.

Ace editor control

Ace, the #1 JavaScript code editor – akin to Delphi's SynEdit or the popular Sublime editor – has likewise been added. This should make a lot of things easier, especially if you are creating front-end code for your cloud services (which you can now write in pristine node.js).


TW3LayoutControl

TW3LayoutControl is another nice addition. This is essentially a scrollbox with classical rulers top and left. Perfect for image display programs, paint programs, layout designers and similar tasks. You can override and adapt this heavily and make it the basis for a calendar grid if you like.

We have also completely updated, rewritten and quality-checked every single control in the RTL. They are now more robust, they update and synchronize with their ready-state more or less like native controls do – and I have spent quite some time making sure they behave as they should.

Oh and yes, there are more controls (this post is getting long).

NodeJS high-level classes

Without a doubt, the feature that has received the most attention in this update is the Node.js high-level classes and units.

The previous NodeJS namespace and project type gave you more or less the bare-bones of node.js. This could be quite confusing for developers coming straight from Delphi or Lazarus – especially if you don't really want to dive too deep into the world of JavaScript, but enjoy some abstraction and ease of use.

The new SmartNJ namespace, where the unit files are prefixed with the same name, implements common classes for both clients and servers. Writing an HTTP server is for instance very simple – and the code is in many cases as close to the other server types as possible.

The idea is that if you know how to write one type of server, you should be able to write the other types as well, because they share the same ancestor and commonalities.
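
To give you an idea, here is a rough sketch of a minimal HTTP server. The class names TNJHTTPServer, TNJHttpRequest and TNJHttpResponse are the ones listed below, but the member names I use (Port, OnRequest, Start, Write) are assumptions made for the sake of illustration – consult SmartNJ.Server.Http for the real API:

uses
  SmartNJ.Server.Http;

var
  Server: TNJHTTPServer;
begin
  Server := TNJHTTPServer.Create;
  Server.Port := 8080;  // assumed property name
  Server.OnRequest := procedure (Sender: TObject;
      Request: TNJHttpRequest; Response: TNJHttpResponse)
    begin
      Response.Write('Hello from Smart Pascal!');  // assumed method name
    end;
  Server.Start;  // assumed method name
end;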

As of writing the following server types are supported:

  • UDP
  • HTTP
  • Websocket (WebSocketIO)

The class architecture looks more or less like this:

  • TNJCustomServer
    • TNJUDPService
      • TNJUDPServer
    • TNJHTTPServer
    • TNJWebSocketServer
  • TNJCustomClient
    • TNJUDPService
      • TNJUDPClient
    • TNJHttpClient
    • TNJWebSocketClient

Clients and servers have the traditional auxiliary objects, such as request and response objects:

  • TNJServerChildClass
    • TNJCustomServerRequest
      • TNJHttpRequest
    • TNJCustomServerResponse
      • TNJHttpResponse

The following units make up the SmartNJ namespace today:

  • SmartNJ.Filesystem
  • SmartNJ.Network.Bindings
  • SmartNJ.Network
  • SmartNJ.Server
  • SmartNJ.Server.Http
  • SmartNJ.Server.Udp
  • SmartNJ.Server.WebSocket
  • SmartNJ.Streams
  • SmartNJ.System
  • SmartNJ.Udp
  • SmartNJ.WebSocket

Unified filesystem

In previous versions of the RTL we presented several ways to store information. We had thin wrappers over cookies, local storage and file storage. You could also use one of the 3 database engines to store data (sandboxed).

These units still ship with the RTL, so should you want to pick one in particular, that is of course not a problem. But in order to streamline and make IO as uniform as possible, I have introduced the notion of a standard “filesystem”.

The filesystem base-class can be found (as expected) in the System.Filesystem.pas file. This contains the abstract class that makes up the most common filesystem operations.

NodeJS gives you a filesystem class that has direct access to the real files and folders, so be careful when you use that. As expected this filesystem implementation can be found under SmartNJ.Filesystem.pas.

And last but not least you will find a filesystem unit for visual browser applications, under SmartCL.Filesystem.pas.

The idea here is that you can write the exact same code and it will work regardless of which platform you run on. The same code you write for node.js will then save or load data in the browser as well.
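
As a sketch of the idea – the base-class name TW3CustomFilesystem and the asynchronous Write() signature below are my assumptions, so consult System.Filesystem.pas for the actual API:

procedure SaveLog(Filesystem: TW3CustomFilesystem; const Data: string);
begin
  // The same call works whether Filesystem is the browser or the node.js implementation
  Filesystem.Write('log.txt', Data,
    procedure (Success: Boolean)  // assumed async callback signature
    begin
      if not Success then
        WriteLn('Failed to write log file');
    end);
end;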

This system will be expanded to include Cordova Phonegap, NodeWebkit (a hybrid post-compiler to make desktop applications) and embedded platforms.

Note: NodeJS is special in that it gives you both an asynchronous filesystem and a blocking filesystem. If you want to use the blocking version, simply access it via the blocking interface (see unit file). But we strongly suggest that you avoid this, since it will have a huge impact on performance. You must never use the blocking IO interface on servers, except perhaps during startup – it will completely undermine the node.js runtime scheme and yield terrible performance.

Database support

This has been a pressing issue for quite some time, and while we are close, we still need to add more infrastructure before we can wrap the engines we want. Although classes will be available soon – being able to visually hook up data in common RAD style is what we want.

For that to happen I need the following features in the IDE:

  • Datamodules
  • Designer drop-zone for non-visual components
  • Design support for property mapping

Without these features, database support under node.js would be flawed at best. But fret not – it is being worked on, and we are presently evaluating engines for both the server and the client.

Client side we will provide the same DB API, but here it will wrap the existing engines:

  • WebSQL [deprecated by W3C]
  • SQLite [default]
  • TW3Dataset
  • IndexedDB

The reason this is more complicated than it may look is first of all because IndexedDB and MongoDB are object-based databases, not SQL-based. So the architecture really needs property mapping and the IDE to back it up. Naturally it can be done in pure code (which we have started), but the IDE with the built-in database designer is what will make this more sane to work with.

Action support

Yes, finally it is here. TCustomAction and TAction have been added to the RTL, and appropriate controls now expose an Action property that you can assign to.

Again, we need the next update before this becomes easier to work with. You need to be able to drop a TW3ActionManager on your form or datamodule, and then assign that via the property inspector to the control.

The IDE will automatically generate the “glue” code on compile for you, and it will execute as a part of your {$I 'form: impl'} section and set things up.

Right now you have to create and assign actions manually — but at least it's finally in place. Which is really good, because that allows me to write default actions for various tasks, just like you have in Delphi, C++Builder and Lazarus.
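
Until the designer support lands, the manual wiring looks roughly like this. TAction and the Action property are confirmed above; the constructor argument and the OnExecute event are assumptions modelled on Delphi:

var
  SaveAction: TAction;
begin
  SaveAction := TAction.Create(self);  // owner semantics assumed
  SaveAction.OnExecute := procedure (Sender: TObject)
    begin
      SaveDocument();  // your own handler
    end;
  FSaveButton.Action := SaveAction;    // controls now expose an Action property
end;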

Final words

This has been a preview, not the full load. There are many other changes to the codebase, some of the changes quite radical. Hopefully you find this interesting and will enjoy using it as much as we have enjoyed creating it.

Things that haven't been covered yet are:

  • Pixi.js
  • Plumb.js
  • Stacktrace support
  • Codec framework
  • eSpeak
  • Vector graphics display control
  • Support for Sketchpad 3D models

Loading HD 3D models into your Smart applications is now possible (and easy). Stills don't do this model justice ..

And yes, I have also added support for asset management. This won't be fully visible before the IDE update, but being able to pre-load pictures, sounds and resources before your application starts is now simplicity itself. As a bonus we also added support for Require(), which is more or less the de facto JavaScript way of loading assets (and more) these days. You even get a sweet callback when it's done, or when something goes wrong.

Stay tuned! I’m working as fast as I can

/Jon

Smart Pascal: Update specifics

January 20, 2017 Leave a comment

With what is probably the biggest update Smart Mobile Studio has seen to date only a couple of weeks away, what should the Smart Pascal programmer know about the new RTL?

I have written quite a few articles on the changes, so you may want to backtrack and look at them first – but in this post I will try to summarize two of the most important changes.

Unit files, a background

The first version of Smart Mobile Studio (SMS) was designed purely for webkit-based HTML5 applications. As the name implies, the initial release was designed purely for mobile application development, with the initial target firmly fixed on iPhone and iPad. As a result, getting rid of webkit-exclusive code has been very time-consuming to say the least.

At the same time we added support for embedded devices, like the Espruino micro-controller, and in the course of that process it became clear that the number of devices we can support is huge. As of writing we have code for more than 40 embedded devices lined up, ready to be added to the RTL (after the new RTL is in circulation).

Add to this the fact that node.js is now the most powerful cloud technology in the world, and the old “DOM only”, webkit-oriented codebase was hopelessly narrow and incapable of doing what we wanted. Something had to give, otherwise this would be impossible to maintain – not to mention document in any reasonably logical form.

A better unit system

I had to come up with a system that preserved universal code – yet exposed special features depending on the target you compile for.

So when you compile for HTML5 you get all the cool stuff the document object model (DOM) has to offer, but also full access to the universal units. The same goes for node.js: it has all the standard stuff that is universal – but exposes a ton of JS objects that exist nowhere else.

What I ended up with was that each namespace should have the same units, but adapted for the platform it represents. So you will have units like System.Time.pas, which represents universal functionality regarding timing and synchronization; but you are supposed to use the one prefixed by your target namespace – for instance, SmartCL.Time or SmartNJ.Time. In cases where the platform doesn't expose anything beyond the universal code, you simply fall back on the System.*.pas unit.


It may look complex, but it’s super easy once you get it

The rules are super simple:

  • Coding for the browser? Use the units prefixed with SmartCL.*
  • Coding for node.js? Use the units prefixed with SmartNJ.*
  • Coding for embedded? Use the units prefixed with SmartIOT.*

Remember that you should always include the system version of the unit as well. So if you include SmartCL.Time.pas, add System.Time.pas to the uses list as well. It makes it easier to navigate should you want to CTRL + Click on a symbol.
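
For a browser project using the timing code, a typical uses clause would then look something like this (the important pairing is System.Time plus SmartCL.Time; the other unit names are just there to show a typical list):

uses
  System.Types,
  System.Time,     // universal timing and synchronization code
  SmartCL.System,
  SmartCL.Time;    // browser-specific additions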

Better synchronization engine

JavaScript is async by nature. Getting JavaScript to behave like traditional Object Pascal is not only hard, it is futile. Besides, you don't want to be treated like a child and shielded from the reality of JavaScript – because that's where all the new cool stuff is at.

In Delphi and Lazarus we are used to our code being blocking. So if you create child objects in your constructor, these objects are available there and then. Well, that's not how JavaScript works. You can create an object, the code continues, but in reality the object is not finished yet. It may in fact still be assembling in the background (!)

To make writing custom controls sane and human, we had to change a few things. So in your constructor (which is InitializeObject in our RTL, not the actual class constructor) you are expected to create child objects – but you should never use them there.

Instead there is a method called ObjectReady(), which naturally fires when all your child elements have been constructed properly and the control has been safely injected into the document object model. This is where you set properties and start to manipulate your child objects.


So finally the setup process is stable, fast and working as you expect. As long as you follow these simple rules, your controls will appear and function exactly like you expect them to:

  • Never override Create and Destroy, use InitializeObject and FinalizeObject
  • Override ObjectReady() to set initial values
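
Put together, a minimal custom control following these rules could look like the sketch below – TW3CustomControl as ancestor and TW3Label as child are just examples:

type
  TMyControl = class(TW3CustomControl)
  private
    FCaption: TW3Label;
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure ObjectReady; override;
  end;

procedure TMyControl.InitializeObject;
begin
  inherited;
  FCaption := TW3Label.Create(self);  // create child objects here, but don't touch them yet
end;

procedure TMyControl.FinalizeObject;
begin
  FCaption.Free;
  inherited;
end;

procedure TMyControl.ObjectReady;
begin
  inherited;
  FCaption.Caption := 'Ready!';  // safe: the control is now injected into the DOM
end;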

Creation flags

We have also added creation flags to visual controls. This is a notion familiar from both Delphi's VCL and Lazarus' LCL component structures. It allows you to define a few flags that determine behavior.

For instance, if your control is sized manually and will never really need to adapt – then it’s a waste of time to have Resize() fire when the size is altered. Your code will run much faster if the RTL can just ignore these changes. The flags you can set are:

  TW3CreationFlags = set of
    (
    cfIgnoreReadyState,     // Ignore waiting for readystate
    cfSupportAdjustment,    // Control requires boxing adjustment (default!)
    cfReportChildAddition,  // Call ChildAdded() on element insertion
    cfReportChildRemoval,   // Call ChildRemoved() on element removal
    cfReportMovement,       // Report movements? Managed call to Moved()
    cfReportResize,         // Report resize? Managed call to Resize()
    cfAllowSelection,       // Allow for text selection
    cfKeyCapture            // Assign tabindex, causing the control to issue key and focus events
    );

To alter the default behavior you override the method CreationFlags(), which is a class function:

class function TW3TagObj.CreationFlags: TW3CreationFlags;
begin
  result := [
    // cfAllowSelection,
    cfSupportAdjustment,
    cfReportChildAddition,
    cfReportChildRemoval,
    cfReportMovement,
    cfReportResize
    ];
end;

Notice the cfAllowSelection flag? This determines whether mouse selection of content is allowed. If you are making a text display, an editor, or anything the user should be allowed to select from – then you want this flag set.

The cfKeyCapture flag is also one many people have asked about. It determines whether keyboard input should be allowed for the element. By default this is set to OFF. But now you don't have to manually mess around with element attributes. Just add that flag and your control will fire the OnKeyPress event whenever it has focus and the user presses a key.
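
So for an editor-style control the override can be as small as this – whether inherited is available in this particular class function depends on the RTL version, but the set-union idiom itself is standard Object Pascal:

class function TMyEditor.CreationFlags: TW3CreationFlags;
begin
  result := inherited CreationFlags + [cfAllowSelection, cfKeyCapture];
end;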

Tweening and fallback effects

In the last update a bug managed to sneak into our SmartCL.Effects.pas library, which means a lot of people haven't been able to fully use all the effects. This bug has now been fixed, but we have also added a highly efficient tweening library. The concept is that effects will be implemented both as hardware-accelerated, GPU-powered effects – but also with CPU-based tweening alternatives.

For mobile devices it is highly advised to use the GPU versions (the normal SmartCL.Effects.pas versions).

When you add the unit SmartCL.Effects.pas to your forms, what happens is that ALL TW3MovableObject-based controls are suddenly extended with around 20 fx-prefixed methods. This is because we use partial classes (which means classes can be extended).

This is super powerful, and with a single line of code you can make elements fly over the screen, shrink, zoom out or scale in size. You even get a callback when the effect has finished – and you can daisy-chain calls to execute a whole list of effects in sequence.

FMyButton.fxSizeTo(10, 10, 200, 200, 0.8,
  procedure ()
  begin
    WriteLn('Effect is done!');
  end);

And here executed in sequence – the effects will wait their turn and run one by one:

FMyButton.fxMoveTo(10,10, 0.8)
  .fxSizeTo(100,100, 0.8)
  .fxFadeTo(0.3, 0.8);

Stay tuned for more info!

Smart Pascal: Arduino and 39 other boards

January 17, 2017 Leave a comment

Did you know that you can use Smart Mobile Studio to write your Arduino controller code? People think they need to dive into C, native Delphi or (shrug) Java to work with controller boards or embedded SoCs, but fact is – Smart Pascal is easier to set up, cheaper and a hell of a lot more productive than C.

Arduino getting you down? Why bother, whip out Smart Pascal and do some mild wrapping and kill 40 birds with one stone

What may come as a surprise to you though, is that Smart allows you to write code for no less than 40 micro-controllers and embedded devices! Yes, you read that right: 40 different boards from the same codebase.

Johnny number five

Node.js is the new favorite automation platform in the world of hardware, and boards both large and small either support JavaScript directly, or at the very least try to be JS-friendly. And since Smart Mobile Studio supports Node.js very well (and it's about to get even better support), that should give you some ideas about the advantages here.

The magic library is called Johnny-Five (JN5) and it gives you a more or less unified API for working with GPIO and a ton of sensors. And it's completely free!

As of writing I haven't gotten around to writing the wrapper code for JN5, but the library is so clean and well-defined that porting it should not take more than a day. You can also take a shortcut via our TypeScript importer and use our import tool.
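
To show the direction such a wrapper would take, here is a sketch of a minimal import of the JN5 Led class using Smart Pascal's external-class syntax. The method list mirrors the documented Led API (on, off, blink), but a real wrapper also needs the Board plumbing and constructor options, which I have left out:

type
  JJohnnyFiveLed = class external 'Led'
  public
    procedure &on;                  // "on" escaped, since it is a reserved word in Pascal
    procedure off;
    procedure blink(ms: Integer);   // blink at the given interval in milliseconds
  end;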

Check out the API here: http://johnny-five.io/api/

Here is a list of the sensors JN5 supports:

  • Altimeter
  • Animation
  • Barometer
  • Button
  • Compass
  • ESC
  • ESCs
  • Expander
  • Fn
  • GPS
  • Gyro
  • Hygrometer
  • IMU
  • IR.Reflect.Array
  • Joystick
  • Keypad
  • LCD
  • Led
  • Led.Digits
  • Led.Matrix
  • Led.RGB
  • Leds
  • Light
  • Motion
  • Motor
  • Motors
  • Multi
  • Piezo
  • Pin
  • Proximity
  • Relay
  • Relays
  • Sensor
  • Servo
  • Servos
  • ShiftRegister
  • Stepper
  • Switch
  • Thermometer

And enjoy one of the many tutorials here: http://johnny-five.io/articles/

Up board, a perfect embedded board

December 23, 2016 2 comments

In my previous article about the UP single board computer, my primary focus was to use it purely for emulating a next generation PPC based Amiga, running Amiga OS 4.1 final edition. This is probably the hardest, most challenging task you can give a computer. Even by modern standards emulating a 68k and ppc based hybrid platform is a proverbial assault on the hardware.

Next generation Amiga: it works, but the board is not powerful enough to deliver an enjoyable experience. Classic Amigas however are the best you can imagine!

Just contemplate it for a second: if PPC emulation isn't formidable enough, the target of my emulation was a hybrid platform in the true sense of the word. You have the older Amiga chipset, which is extremely complex. Attached to that (and working in symphony) is a PPC accelerator board designed to do the grunt work.

The UP single board computer came up short for that highly specialized task. But like I wrote, it missed the mark by just a handful of Hz. Had the CPU been just slightly more powerful (and paired with a more suitable storage device), the UP board would have aced it. Which would have been nothing short of miraculous! The fact that it was capable of running Amiga OS 4 at working speed to begin with is extremely impressive. Remember, this is a tiny, single board computer retailing at $150!

And the most amazing part? The CPU emulated the next generation Amiga at a steady 60% CPU utilization. Not once did it spike up to 100%. So the Windows process scheduler held it back for obvious reasons.

Ok, but what about other tasks?

Just because you don’t have the power to knock out Mike Tyson doesn’t necessarily mean you’re a bad fighter. And the same can be said about technology and computers. The UP board came up a little short for next generation Amiga emulation, but so what? The PC I owned back in 2013 and had been using for Delphi development for 3 years couldn’t even start OS 4 of its life depended on it.

Truth be told, the UP board is the most exciting product I have owned for years. I was only this excited about something when I got my first Raspberry PI. I’m even more excited about the UP board because it’s infinitely more capable!

x86 UP board, same size as the PI3 but packing a hell of a punch!

And when I get excited, you should be too. I am privileged and blessed through my work to have access to all the latest gadgets. Be it embedded boards, the latest phones, tablets, graphics cards, operating systems — you name it, I've got it or have access to it. But of all the awesome tech I have played with since 2010, the UP board is the most fun. It's just perfect, and has everything a professional software developer, retrogamer or product integrator could possibly want in a SoC.

Setting up the UP board

The first thing you want to do is get Windows 10 installed. If Linux is your thing then I can strongly recommend Ubuntu, it’s beautiful on the UP and runs like a dream.

But for this article I'll be going with Windows 10, and the easiest way to get that installed (unless you have an external DVD drive and bootable disks) is the following:

  1. Have a USB stick with at least 4 gigabytes
  2. Download Rufus
  3. Download the Windows 10 iso file from Microsoft
  4. Burn the Windows iso file to the USB stick
  5. Plug the USB stick into the UP board and fire it up

Rufus is a cool little program that allows you to burn ISO files to a USB stick and then make the stick bootable. This is not the same as (for example) Windows image writer, which just burns a disk image (*.img file) verbatim to whatever storage media you pick.

Once you have burnt the ISO to your USB stick, it's just a matter of sticking it into one of the free USB slots on the device and powering it up. The normal Windows installer should show up as usual and you just follow the installation process. I'm guessing you have done this a few times in your life, so no need to outline that part.

Enable remote desktop

Once Windows is installed and registered, you want to give the computer a name that makes a little more sense. Windows comes up with these absurd identifiers based on whatever, like “UPx7621ab” and similar stuff.

I renamed my board to “UPBoard” and checked “Allow remote connections”. I then added my admin account (the “select users” button) so I can log in remotely. If you haven't done this before, just google around and you'll find out how:

You don’t need to set up a full remote desktop (which is something new in Windows 10), you just need to enable remote access

Once you have changed all that, you need to reboot the device, so just do that. In the meantime, go over to your work PC and download “Microsoft remote desktop preview”, which is their new remote desktop client. Yes, the good old remote desktop application that has shipped with Windows since the civil war is no longer there. So head over to Microsoft and grab the new one.

Connect to the Up-Board from your PC

OK, with that in place I guess it’s time to play! Fire up the remote-desktop application, add a new location, use the name “UPBoard” as the host, then add your login credentials. Click connect and voila! You now have remote access to your fancy new mini computer!

Click “Add” to add your new remote connection

Then just double-click on the icon that says “UPBoard” in the fancy menu, and that's it! With this in place you don't even need a monitor or keyboard for the board; just get a nice case for it and plug it into the router. You can even leave it beside your router if you like.

Fast, responsive and one hell of a system to work with!

As a developer I take it for granted that you know how to share out a drive or folder, so I'm not going to pussyfoot you through that.

Getting the UP drivers

You really want the UP drivers: they mean faster graphics, and that Windows knows what it can and cannot do with the hardware. So head over to the UP website and download the drivers, then just install them one by one.

Click here to visit the download page.

Developing on the board

I do most of my development through VMWare. Since I have to use various versions of Delphi, Xamarin Studio, Visual Studio and other things I can't even spell – an ordinary PC just won't cut it. I think the smallest PC I have is an Intel i7 octa-core running at 4.1 GHz with 32 gigabytes of memory and an NVidia GeForce GTX 970. This is because I run 3 virtual environments at the same time (client, server and database).

But you don’t need specs like that to write Smart Pascal based node.js services or servers. That’s one of the great things about Smart Mobile Studio. A normal PC will do the trick. And the UP board is much smoother to work on than one of my old machines, that’s for damn sure.

$150 for a fully equipped workstation? Indeed! The UP-board performs brilliantly for Smart Mobile Studio development. So you can easily code, test and deploy on the board itself.

When it comes to CPU use, that's really not a problem. I tried it with several custom applications, including the Smart Pascal kiosk software. This has a node.js server running in the background providing content, while a webkit browser renders the front-end display. Amazingly, the whole thing hardly registers. The system that usually runs at around 60% on the Raspberry PI here clocks in at a measly 5% (!)

5% performance hit. Holy cow!

Nothing to say except awesome, I absolutely love this board!

EMMC, the only real bottleneck of the product

The only real bottleneck, a topic I exhausted in my previous post because it pissed me off like nothing else, is the storage device.

But it’s not really a big problem under Windows or Ubuntu. You do notice it when you download large files or decompress large archives. In order to get my developer tools over I just zipped them down and transferred it over. And while getting the data from A to B was simple enough – it was when I started unpacking the data I noticed how slow it was.

But here is the thing: on a dedicated product, these things don’t matter. If you are building your own games machine, your own router, or designing a new top of the line NAS destined for mass production – it doesn’t matter if unzipping 2 gigabytes of source code is slow. Because that’s not what your product will be doing anyway.

Your product, what you use the UP board to make, will ship with a custom program (or a series of programs) designed by you. They will be pre-installed on the machine as your customer buys it, and unless you are born in a cave – I hardly think you will be forcing your customers to download gigabytes of data and unpacking it. That would be a horrible design mistake regardless of product.

Classic Amiga emulation is flawless and perfect on the UP board. This is probably the best emulation experience I have ever had

Let’s for sake of argument we say you want to use the board to create the coolest retrogaming machine ever invented. You wont be copying over all the roms every single time right? You will copy it over once, or even better – just download it directly since you have Windows or Ubuntu to play with. So again, the somewhat dull storage device is not going to be a problem for you.

And should you build a movie server or media streaming service, like Plex, then you are going to attach an external hard disk anyway and just use the internal storage for booting up. Once again, not a factor at all.

Factor in a drive

The common thread running through the 3 posts I have made about the UP board is the storage device. I have bitched enough about their choice of EMMC storage I think, but instead of just bitching – I want to give you a tip: always factor in a fast USB 3.0 drive when planning a product or home appliance based on the UP single board computer. And that goes double if you plan on developing on the board itself (which it is more than capable of handling).

The CPU and graphics chipset are more than enough to drive Visual Studio, Mono, Delphi, Lazarus or Smart Mobile Studio – but do yourself a favour and reserve the built-in EMMC device for Windows only. Never, ever, on pain of death, install any software bigger than Notepad on it, because it will bug you for all eternity.

As such I cannot recommend that you buy the biggest model, because the only thing separating the biggest UP model from the second best is more bloody EMMC storage. The model just below the best is equipped with the same amount of RAM (4 gigabytes) but only half the disk space. And I so regret not getting that instead. Those $20 would go a long way towards paying for a plastic casing instead.

Unless you want to run a fixed, single application and could use the extra space, it’s $20 straight down the toilet.

So word to the wise: always factor in a fast USB 3.0 external disk if you plan on using the UP board for anything disk intensive. And by disk intensive I also mean compiling code, because that typically plows through hundreds if not thousands of files. So yeah, the EMMC really sucks.

But you can also factor in a SATA drive. Turns out, if you visit the UP web store, you can buy a SATA shield board that turns one of your USB ports into a blistering fast SATA drive slot. It will set you back $85, but in return you get two USB ports and a SATA drive slot. This makes the UP board a suitable candidate for NAS production. Considering the horsepower, you could easily surpass brands like Asustor (which just happens to be my favorite NAS brand).

It would also make the ultimate home streaming service, capable of streaming full HD without breaking a sweat!

Peripherals that make sense

One of the things I hate about the Raspberry PI community is that more often than not, they take for granted that everyone is an electrician or has a background in electronics. So when the moment comes that you want to broaden your horizon, you are faced with thousands of “add-on” products without a clue as to how they work.

A good example is a remote control I ordered earlier. Seems like a fairly straightforward thing, right? And the advert said “only 3 wires to solder, easy for beginners”. What I got was a smart-looking remote, but the part that needs to be soldered has no clear markings of what exactly to solder. There are 8 pins to pick from, no “how to” manual — and this is just typical for the Raspberry community (it's not the first time I have received stuff like this).

The UP webshop makes more sense. They don't have thousands of chips or gadgets, because you are expected to deal with things that run on USB and are supported by the operating system.

As such they sell you the most obvious parts:

  • 10″ touch screen
  • Sata drive shield
  • Plastic casing
  • Converters (USB 3 male/female etc.)
  • Wifi dongle
  • .. and so on

When it comes to remote controls, which are really handy when coding kiosk software that eventually will be keyboard-less (so it's the ultimate admin magic wand), take your pick. There are thousands of wireless USB remote control packages out there, and if they are compatible with Windows, they can be used with UP.

Also a hell of a lot easier to code for!

Personal verdict

This is by far the most exciting gadget I have seen in years. Much more exciting than the Sony Virtual Reality goggles for Playstation 4 (rev 2) that just came out. Don't think I was ever as disappointed as when I got those. Been waiting 20 years for VR to become a reality, only to realize you need a padded room if you want to move around.

So instead of buying the Playstation VR for XMas, get yourself 3 UP boards for the same price. I can guarantee that you will get much more joy from owning them than the Playstation Virtual Reality headset.

My personal verdict for the UP board is a shining 5 out of 6 stars!

No other board except the Raspberry PI 3 has ever gotten that kind of score on my website. Not even the iPad when that came out.

What about classic Emulation? How does that work?

How does it work? It is so smooth that I'm at a loss for words. Seriously, I don't even know where to begin describing how sweet this is.

Let me say it like this: when I fire up Amiga Forever and run the pimped-up 3.x “enhanced experience” (read: one pimped-up Amiga 1200), start IBrowse and visit Amigaworld.net, the webpage renders just as fast as Chromium renders it. Scrolling with the mouse-wheel is blisteringly fast, and you actually don't notice any difference between using the emulated Amiga and Windows.

This setup is pure joy on Amibian on the Raspberry PI. On the UP board it's bliss, bliss on tap, bliss on demand. A perfect emulation experience all the way through.

Images download and render near instantly, music is crisp and clear, not a flicker or jitter in sight. And the CPU has depth; that's the main difference between the Raspberry PI and the UP board. Let me explain what I mean by that.

I downloaded the latest FreePascal (version 3.1), which is the same version that I use on Linux and Windows to produce modern products. Now, on the Raspberry PI, whenever you attempt to compile anything substantial, the whole thing grinds to a halt. The Raspberry PI is excellent for superficial things, especially desktop and games where it can fall back on the GPU and custom chips to get the job done.

Compiling anything, even older stuff that is a joke by today's standards, is painful on the Raspberry PI. Here showing my retro-fitted A500 PI with sexy LED keyboard. It will soon get a makeover with an UP board 🙂

But compilation is pure CPU work and this is where the Raspberry PI cannot hide what it is. A slow, underpowered ARM system on a chip, posing as a real computer.

I compiled a small shell-utility I coded especially for the Amiga, called AmigaZipper. A very humble program with barely 8000 lines of code. When compiling on my UAE Amiga running on the PI, it took almost an hour to produce an executable binary.

How long did it take on the UP board? Oh, 5 seconds, give or take. So where the Raspberry looks great for superficial things, it lacks grit. The UP board however has so much more power it can apply to a task, and plows through it like a hot knife through butter.

Classic Amiga runs along nicely, eating around 40% of the CPU running in 1920x1080x32 bpp. Even with the latest FreePascal (3.1) compiling on the 68k, it never broke 60%. Perfect. Just perfect!

If you are serious about emulating Amiga up to OS 3.9, then this is absolutely the board to get! With the Raspberry foundation releasing Pixel for x86, we can only hope Gunnar Kristiansson creates Amibian for x86 as well — then all will be right with the world and the force will be in balance once again.

Merry Xmas!

UP board, first impressions for emulation

December 21, 2016 2 comments

To get the most out of this post please read my previous post, Embedded boards, finally. For general use and emulating classical Amiga, also read my next article: UP board, a perfect embedded board.

In the previous post I went through the hardware specs for both the ODroid XU4 ARM embedded board, as well as the x86-based UP SoC board. Based on their specs I also made some predictions regarding the performance I expect them to deliver, what tasks they would be best suited for – and what they would mean for you as a developer, or indeed an emulation enthusiast.

In this post we are going to start digging into the practical side of things. And what is probably on everyone’s mind is: will the UP board be powerful enough to emulate and run Amiga OS 4.1 final edition? What about retrogaming, HTML5 and general use?

Well, let’s dig into it and find out!

Note: If emulation is not your cup of tea, I urge you to reconsider. I am a dedicated developer and can tell you that emulation is one of the best methods to find out what your hardware is capable of. Nothing is as demanding for a computer as emulating a completely different CPU and chipset, especially one as elaborate and complex as the Commodore Amiga. But I also use a Smart Pascal demo as a general performance test – so even if gaming is not your thing, the board and its capabilities might be.

EMMC storage, a tragedy without end

EMMC is cheap and easily available

The UP-board uses something called EMMC storage; this is quite common for embedded devices. Your TV tuner probably has one of these, as does your $100+ router, and in all likelihood so does your NAS or laser printer. To make a long story short, this storage medium is flexible, cheap and easy for vendors to add to their custom chipset or SoC. It is marketed as a reasonable alternative to SSD, but sadly these two technologies have absolutely nothing in common, except perhaps that they both are devices used to store data. But that's where any similarities stop; and truth be told, the same would be the case for pen and pencil.

EMMC is an appalling technology, honestly. It works for some products, in the sense that you would gladly wear a cardboard box if the alternative is to walk around naked in public. For devices where responsiveness and efficiency is not the most pressing factor (like routers, TV tuners, set-top boxes and similar devices) it can even work well, but for devices where responsiveness and data throughput is the most important thing of all (like a desktop or emulation) EMMC is a ridiculous choice.

Honestly, there are many cases where embedded boards are better off without them. I used to hate that the Raspberry PI 3 didn't ship with EMMC, but that was before I got the pleasure of trying the storage medium myself and experiencing how utter shit it truly is. It reminds me of ZIP disks, remember those? Lumpy, overgrown floppy disks that went extinct in the 90's without realizing it?

In terms of speed, EMMC sits somewhere between USB 2 and your average SD card. The EMMC disk on the UP-board falls in the usable category at best. It works ok-ish with a modern operating system like Windows 10 or Ubuntu, but just like the Latte Panda and similar devices are haunted by a constant lag or dullness whenever IO is involved, the same is true for the UP-board. It saturates the whole user experience; it's like the computer is constantly procrastinating. Downloading a file that should take 5 minutes suddenly takes 20 minutes. Copying a large file over the network, like a Blu-ray HD file, is borderline absurd. It took over half an hour! It normally takes less than 20 seconds on my desktop PC.

I might get flamed for this but my Raspberry PI 3 actually performed better when using a high-speed USB stick. I did the exact same test:

  • Download the same file from my NAS right here at home
  • The data goes straight from my NAS to the router and straight to disk
  • On the PI i used a Sandisk 256 gigabyte USB pen-drive

I don’t have the exact number at hand, but we are talking 10-12 minutes tops, not half an hour. The PI is hopelessly inferior to the UP-board in every aspect, but at least the PI foundation was smart enough to choose low-price over completeness. The people behind UP could learn a thing or two here.

Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4

I simply cannot understand what they were thinking. Consider the following factors:

  • The speed of running a server is now determined by the read/write performance of your storage device. Forget about factors like memory, number of active threads, socket capacity or network throughput.
  • Should the operating system start paging, for example if you run a program that exceeds the memory Windows has left (Windows eats 1 gigabyte of RAM for breakfast), you are screwed. The system will jitter and lag while Windows desperately maps regions of the pagefile to a fixed memory address, performs some work, then pages it back again before switching to the next process.

I really wish the architects of the UP-board had just ditched EMMC completely, because it creates more problems than it solves. The product would be in a different league had they instead given it four USB 3 ports (it presently has only one USB 3 port; the rest are USB 2). While I can only speculate, I imagine the EMMC unit costs between $20 and $40 depending on model (8 Gb, 16 Gb or 32 Gb). The entire kickstarter project would have been way more successful if they had cut that cost completely. Just imagine, a powerful x86 embedded board with 4 gigabytes of ram, 4 x USB 3 ports, outstanding graphical performance, excellent audio capabilities – and all of it for $80?

It would be nothing short of a revolution.

Graphics

When it comes to graphics, the board just rocks! This is where you notice how slow the Raspberry PI 3 truly is. The Raspberry PI 3 (RPI3) ships with a fairly decent GPU, and that GPU has been instrumental for the success of the whole endeavour. Without it, all you have is a slow and boring ARM processor barely capable of running Linux. Try to compile something on the RPI3 (like node.js from source, like I did) and it will quickly burst your bubble.

The UP board ships with a very capable little graphics chipset:

  • Intel® HD 400 Graphics
  • 12 EU GEN 8 up to 500MHz
  • Support DirectX 11 through 12
  • Supports Open GL* 4.2, Open CL*1.2, OGL ES3.0
  • Built in H.264, HEVC, VP8 encoding/decoding

The demo I use when testing boards is a JavaScript demo. You can watch it yourself here.

The particles JavaScript canvas demo was coded in Smart Mobile Studio and pushes the HTML5 graphics engine to the edge

Here are some figures for this particular demo. It will help you get a feel for the performance:

  • Raspberry PI 2b, 1 frame per second
    • Overclocked 2 frames per second
  • Raspberry PI 3b, 3 frames per second
    • Overclocked 7 frames per second
  • UP-board 18 frames per second
    • Overclocked: not overclocked yet

As you can see, out of the box the UP board delivers 6 times the graphics throughput of a Raspberry PI 3b. And remember, this is a browser demo. I use this because webkit (the browser engine) involves so many factors, from floating point math to memory management, sub pixel rendering to GPU powered CSS3 effects.

What really counts though is that the PI’s CPU utilization is always at 100% from beginning to end; this demo just exhausts the RPI3 completely. This is because this JavaScript demo does not use the GPU to draw primitives. It uses the GPU to display a pixel buffer, but drawing pixels is purely done by the processor.

Where the RPI3 goes through the roof and is almost incapable of responding while running that demo in full-screen, the UP-board hardly breaks a sweat.

It’s strolls along with a CPU utilization at 23%, which is nothing short of fantastic! Needless to say, its allocating only a fraction of its potential for the task. It is possible under Windows to force an application to run at a higher priority. The absolute highest mode is tp_critical, which means your program hogs the whole processor. Then you have tp_highest, tp_higher, tp_high, to_normal – and then some slow settings (tp_idle for example) that is irrelevant here.

Still, six times faster than the Raspberry PI 3b, yet only utilizing 23% of its available horsepower? That means we have some awesome possibilities at our fingertips! This board is perfect for the dedicated retrogamer or people who want to emulate more modern consoles – emulation that is simply too much for a Raspberry PI 3 or ODroid to handle. Personally I'm looking forward to the following consoles:

  • Playstation 1 and 2
  • Nintendo Wii
  • XBox one
  • Nintendo Gamecube
  • Sega Saturn

Keep in mind that we are running Windows 10, not some esoteric homebrew Linux distro. You can run the very best emulators the scene has to offer. Hyperspin? No problem. Playstation 2 emulation? No problem. Sega Saturn emulation? A walk in the park. And before you object to the Saturn, this console is actually a very difficult chipset to emulate. It has not one but 3 RISC processors, a high-quality sound chip and a distributed graphics chipset. It represents one of the most complex architectures even to this day.

As you can imagine, emulating such a complex piece of hardware requires a PC with some punch in it. But that's not a problem for the UP board. It will happily emulate the Saturn while playing your favorite MP3s and streaming a movie from NetFlix.

Conclusion: the UP-board delivers a really good graphics experience. It has support for the latest DirectX API and drivers; OpenGL (and associated libraries) is likewise not a problem. This board will deliver top retro gaming experiences for years to come.

Amiga Forever

Right, since I own Amiga Forever from Cloanto, and recently bought Amiga OS 4.1 to use exclusively with it – it made more sense for me to just install Amiga Forever directly on the UP-board and copy the pre-installed disk image over. I do have the ordinary and latest build of UAE, but I wanted to see how it performed.


This is where I first noticed just how darn slow EMMC is. Even simple things like downloading Amiga Forever from Cloanto’s website, not to mention copying the OS 4.1 disk image over the local network took 10-15 minutes each (!). This is something that on my ordinary work PC would have taken seconds.

At this point I became painfully aware of its limitations. This is just a low-priced x86 embedded board after all; it's not going to cure cancer or do your dishes. Just like the Raspberry PI 3 suffers whenever you perform disk IO – so will all embedded boards bound to less-than-optimal storage devices. So the CPU is awesome, memory is great, the USB 3 port blazing, graphics and GPU way ahead of the competition; in short: the UP board is a fantastic platform! So don't read my negative points about EMMC storage as being my verdict on the board itself.

Now let’s look at probably the most advanced emulation task in the world: to emulate the Commodore Amiga with a PPC accelerator.

Emulating Amiga OS 4.1, is the UP board capable of it?

If we look away from the staggering achievement of emulating not one chipset but two (both 68k and PPC) side by side, I can tell you that Amiga OS 4.1 works. I write works because, in all honesty, you are not going to enjoy using Amiga OS 4.1 on this board. Right now I would categorize it as interesting and impressive, but it's not fast and it's not enjoyable. Then again, OS 4.1 and PPC emulation is probably the hardest and most demanding emulation there is. It is a monumental achievement that it even works, let alone boots, on a $150 embedded computer the size of a credit card.

Next generation Amiga: it works, but the board is not powerful enough to deliver an enjoyable experience. Classic Amigas however are the best you can imagine.

So if you are considering the UP-board solely to emulate a next generation Amiga, then either wait for the “UP 2 board” that will be released quite soon (March 2017), or look at alternatives. I honestly predicted that it would pull this off, but sadly that was not the case. It lacks that little extra; just a little bit more horsepower and it would be a thing of beauty.

Storage blues

Had I known how slow the EMMC storage was, I would have gone for the UDOO Ultra instead, which is a big step up both in power and price. It retails at a hefty $100 above the UP board, but it also gives you a much faster CPU, 8 gigabytes of memory and, hopefully, a faster variation of EMMC storage. But truth be told I sincerely doubt the disk IO on its EMMC is significantly faster than on the UP-board.

Either way, if fast and furious PPC emulation is your thing, then $250 for the UDOO Ultra is still half the price of the A1222 PPC board. I mean, the A1222 next generation Amiga motherboard is just that: a motherboard. You don't get a nice reasonably priced Amiga in a sexy case or anything like that. You get the motherboard, and then you need to buy everything else on the side. $500 buys you a lot of x86 power these days, so there is no way in hell I'm buying a PPC-based Amiga motherboard for that price. Had it been on sale 10-15 years ago it would have been a revolution, but those specs in 2017? PPC-based? Not gonna happen.

So if you really want to enjoy OS 4.1 and use it as a real desktop, then I just have to say: go for the real deal. Get the A1222 if you really want to run OS 4.1 as your primary desktop. I think it's both a waste of time and money, but for someone who loves the idea of a next generation Amiga, just get it over with and fork out the $500 for the real thing.

Having said all this, emulating OS 4.1 on the UP board is not terrible, but it's not particularly usable either. If you are just curious and will perhaps fire up OS 4 on rare occasions – it may be enough to satisfy your curiosity; but if you are a serious user or software developer, then you can forget about the UP-board. Here it's not just the EMMC that is a factor; the CPU simply doesn't have the juice.

Classic Amiga is a whole different story. Traditional UAE emulating an Amiga 4000 or 1200 is way beyond anything the Raspberry or ODroid can deliver. The same goes for retrogaming or using the board for software development.

Unless you are prepared to do a little adaptation that is.

Overcoming the limitations

Getting around the slow boot time (for OS 4) is in many ways simple, as is giving the CPU a bit of a kick to remove some of the dullness that is typical for embedded boards. The rules are simple:

  • Files that are accessed often, especially large files, should be dumped on a USB thumb-drive. Make sure you buy a USB 3.0 compliant drive and that it's class 10 (the fastest read/write speed). And naturally, use the USB 3.0 socket for this one
  • Add a fan and do some mild tweaking in the CPU-Z and GPU-Z overclocking tools for Windows. As mentioned in other articles, you don't want to mess around with overclocking if you don't have the basic setup. Lack of cooling will burn your SoC in a couple of minutes. There is also much to be said for restraint. I find that mild overclocking does wonders; it also helps preserve the CPU for years to come (as opposed to running the metal like a hound from hell, burning it to a crisp within a year or less).
  • Drop using the EMMC completely and install Windows on a dedicated USB 3.0 external harddisk. But again, is it a full PC you want? Or is it a nice little embedded board to play with?

Since Amiga Forever and Amiga OS 4.1 were where I really had problems, the first thing I did was to get the Amiga system files out of the weird RP9 file format. It is basically just a zip-file containing the harddisk image, configuration files and everything you may need to run that Amiga virtual machine. But on a system with limited IO capacity, that idea is useless.

Once I got the harddisk HDF file exported, I mounted it and also added a harddisk folder. Then it's just a matter of copying the whole Amiga OS disk over to the folder. This means that the Amiga files will now reside directly on the PC drive, rather than in some exotic structured storage file mimicking a harddisk from the mid 90's.

As expected, booting from Windows to Workbench went from 1 minute (yes, I kid you not!) to 20 seconds. Still a long wait by any measure — but this can be easily solved. It became clear to me that maybe, just maybe, the architects of an otherwise excellent embedded board had a slightly different approach to storage than we do.

I know for a fact that it's quite common to use EMMC purely as a boot device, and then distribute the IO payload to external drives or USB sticks. Some also do the opposite and place the OS on a high-speed USB stick (as mentioned above) and use the EMMC to store their work. With “work” I am here referring to documents, email, perhaps some music, images and your garden-variety assortment of data.

Add overclocking to the mix and you can squeeze out much better performance from this fantastic little machine. I still can't believe this tiny little thing is capable of running Windows 10 and Ubuntu so well.

Final verdict

I could play with this board all day long, but I think people get the picture by now.

The UP-board is fantastic and I love it. I was a bit let down by it not having enough juice to run Amiga OS 4.1 Final Edition, but in all honesty it was opportunistic to expect that from an Intel Atom processor. I'm actually impressed that it could run it at all.

I was also extremely annoyed with the EMMC storage device (a topic I have exhausted in this article, I think), and in the end I just disabled the whole bloody thing and settled on a high-quality USB 3 stick with plenty of space. So it's not the end of the world, but it does feel like I have thrown $50 down the toilet for a feature I will probably never use. But who knows; when I use it to run my own programs and design a full system – perhaps it won't matter as much.

Is it worth $150 for the high-end model? I cannot get myself to say yes. It is worth it compared to the high-end ARM boards that go for $80-$120, especially since x86 runs Windows, and that opens up a whole universe of software that Linux just doesn't have; at least not with the same level of user-friendliness.

Having said that, there are two new x86 boards just around the corner, both cheaper and more powerful. So would I buy this board again if I could return it? No.

I love what I can do with it, and it's way ahead of the Raspberry PI or the ODroid boards I have gotten used to and love, but the EMMC storage just ruins the whole thing.

Like I wrote – perhaps it will grow on me, but right now I feel it’s overpriced compared to what I would have gotten elsewhere.

UP-board for software developers

So far I have focused exclusively on retrogaming and emulating the next generation PPC-based Amiga systems. This is an extremely demanding task, way beyond what a normal Windows application would ever expect of the system. So I haven't really written anything substantial about what UP has to offer software developers or system integrators.

Using this board to deliver custom applications written in Delphi, C++ or Smart Pascal [node.js] is a whole different ballgame. The criteria in my line of work are very different and it’s rare that I would push the board like I have done here. It may happen naturally, perhaps if I’m coding a movie streaming server that needs to perform conversion on demand. Even a SVN or Github server can get a CPU spike if enough people download a repository in ZIP format (where a previously made file is not in the cache). But if you have ever worked with embedded boards you should know what to avoid and that you cannot write the code like you would for the commercial desktop market.

If you need an affordable, powerful and highly versatile embedded board for your Delphi, C++ or Smart Pascal [node.js] products, then I can recommend the UP boards without any hesitation! The board rocks and has more than enough juice to cover a wide range of appliances and custom devices.

As a Delphi or Smart Pascal centric board it's absolutely brilliant. If you work with kiosk systems, information booths or media servers along the lines of Plex or Asustor, then no other board on the market gives you the same bang for your buck. There is simply no traditional embedded retailer that can offer anything close to UP in the $90-$150 range.

If we compare it to traditional embedded boards, for instance a similar configuration sold by Advantech, you save a whopping 50% by getting the UP board instead!

Take the MIO-2261N-S6A1E embedded board. It has roughly the same specs (or in the same ballpark, if you will), but if you shop at Advantech you have to fork out 215 euros for the motherboard alone! No RAM, no storage, just the bare motherboard. You don't even get a power supply.

What you do get, however, is a kick-ass SATA interface; but you still have to buy a drive.

If we try to match that board to what UP gives you for $150 (and that is the high-end UP board, not the cheap model), you hit the 300 euro mark straight away just by adding the RAM chips and a power supply. And should you add a tiny SSD to that equation, you have reached a price tag of 350 euros ($366). So the UP board is not just competitive, it's trend-setting!

So even though I would refrain from getting the UP board purely for emulating next-generation PPC Amiga computers, I will most definitely be using it for work! It is more than capable of becoming a kick-ass NAS, a fast and responsive multimedia center, a web server for small businesses, a node.js service stack or a cloud machine. And when it comes to kiosk systems, the UP board is perfect!

So for developers I give it 4 out of 6 stars!

Build your own Delphi-based operating system

October 19, 2014 8 comments

Those of us who grew up in the hacker community of the 80's and 90's have a different mindset from kids today. It's sort of built into us that if we put our minds to it, we can do anything. Any system can be hacked, modified, built. Why? Because we can, that's why 🙂

I still remember getting so angry at losing in Soccer Manager that I got my drill, Black & Decker'ed my way through the F1 key, then soldered it shut in order to win the tournament! Yeah, we Norwegians have stubbornness issues. But on the positive side, we get stuff done!

Operating systems

While games were my dream job as a teenager, another side of my aspirations was building my own programming language (I sort of did that) and my own operating system. Not a system like Linux or Windows, but rather just my own desktop environment. Back then the Amiga had its kernel in ROM, so the only things that loaded when you booted the machine were the drivers and the desktop environment. So naturally everyone wanted to pimp or replace that with their own.

Xamarin rewrote Android to C#

Well, the Amiga is long gone and buried (sadly) and now exists only in the virtual world of emulation. But would it be possible to create your own, Pascal-driven desktop environment for x86 computers?

Now, if you are thinking about a fully-fledged operating system with its own hardware drivers (this is really the biggest challenge, because there are so many configurations out there), kernel and software, then I would say no. Not because it's impossible, but because it's improbable that anyone would invest the amount of time and money required. Having said that, several operating systems have been written in FreePascal, the latest endeavor being FPOS. But as you can probably imagine, it takes a long time to truly build an OS from scratch.

But creating your own environment which is 100% controlled by your code? That is more than possible. It's already been done by several companies in the field of backup software and embedded NAS systems.

Xamarin, the guys behind Mono, even took the time to rewrite Android in .NET a while back, a project called XobotOS. You can check that out on GitHub here: https://github.com/xamarin/XobotOS

Delphi and DOS

You probably think that Delphi is only for the desktop. Well, that's how it was designed. But the code generator inside Delphi produces ordinary, bog-standard machine code; all it really does is emit instructions that the x86 processor can execute. There is nothing inherently Windows-bound about that.

What we need is something which emulates the absolute bare-bones functionality of Windows. I am talking about the true low-level stuff: AllocMem(), reading mouse coordinates, keyboard input and so on.

Delphi for DOS, works like a charm

Well, it just so happens that such a system exists. It's called WDOSX, and it was primarily adapted for Delphi 5 and 7. It takes your ordinary Delphi .exe file and patches it to use the WDOSX extender, which basically turns your Windows program into a DOS program. The extender emulates the Windows functions (memory allocation and so on) on top of DOS rather than the WinAPI, effectively fooling your application into thinking it's running under Windows.
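
To show how little ceremony this involves, here is a minimal sketch. Any plain console program will do; the patching happens afterwards, via the stub tool that ships with WDOSX (STUBIT, if memory serves, so treat that name as an assumption):

    program HelloDos;

    {$APPTYPE CONSOLE}

    begin
      { A perfectly ordinary Win32 console program. After the WDOSX
        stub tool has patched the compiled .exe, it runs on bare DOS,
        with the extender supplying the few WinAPI calls it needs. }
      WriteLn('Hello from Delphi, running on DOS');
    end.

Compile it as usual in Delphi, run the patch tool over the resulting .exe, and it will run straight off a DOS prompt.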

Limitations

The limitations are obvious: no windowing toolkit, no threading, and pretty much what you would expect from a DOS program. On the positive side though, you do get a flat memory model, so you can address your RAM as one linear space (within the 4 GB limit of 32-bit code). No "high" memory and "low" memory, no 64k segments to worry about.
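
To illustrate what a flat memory model buys you, here is a minimal sketch; a single GetMem call hands you one contiguous block, with no segment juggling anywhere:

    program FlatMem;

    {$APPTYPE CONSOLE}

    const
      BufSize = 64 * 1024 * 1024;  // one contiguous 64 MB block

    var
      Buf, P: PByte;

    begin
      GetMem(Buf, BufSize);
      try
        Buf^ := $AA;           // touch the first byte
        P := Buf;
        Inc(P, BufSize - 1);   // step straight to the last byte
        P^ := $55;             // the block really is linear
        WriteLn('Allocated and touched 64 MB as one flat block');
      finally
        FreeMem(Buf);
      end;
    end.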

You will also enjoy a very VCL-like RTL, namely CLX (Kylix). DWPL patches the CLX source code so that it works under DOS. It especially patches the client/server units, so when it comes to networking you will be surprised just how powerful it is.

Add full-screen access, mouse, sound and complete access to the disk, and you have probably the most powerful "DOS" development platform ever created. CLX is the open-source port of the VCL to Linux (read: platform-independent). A variation of CLX is used by Lazarus, so it's pretty solid stuff.

Kylix, a project years ahead of its time

Applications

It won't take long before you ask: "but what about applications?" Well, you are in luck, because Delphi has some of the best scripting engines in the world. You could use RemObjects Pascal Script to build applications, or what about DWScript (which is much better)? I seem to remember that the latest (read: last) WDOSX extender included support for cooperative "green" threads (so there is a TThread class in there somewhere). With a bit of work you could run several DWScript-based applications at once.
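
Here is a rough sketch of what embedding DWScript looks like. The unit and member names below are from my memory of the DWScript sources, so treat the exact signatures as assumptions rather than gospel:

    uses
      dwsComp, dwsCompiler, dwsExprs;

    procedure RunScript(const Source: string);
    var
      Engine: TDelphiWebScript;
      Prog: IdwsProgram;
      Exec: IdwsProgramExecution;
    begin
      Engine := TDelphiWebScript.Create(nil);
      try
        Prog := Engine.Compile(Source);   // compile the script source
        if Prog.Msgs.HasErrors then
          WriteLn(Prog.Msgs.AsInfo)       // report compile errors
        else
        begin
          Exec := Prog.Execute;           // run it
          WriteLn(Exec.Result.ToString);  // whatever the script printed
        end;
      finally
        Exec := nil;                      // release before the engine goes
        Prog := nil;
        Engine.Free;
      end;
    end;

Each "application" on your desktop could then be little more than a script fed into its own execution context.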

And the speed would be phenomenal, because your application would in fact be the only "real" application running on the entire machine. With the exception of FreeDOS and two or three drivers, naturally.

A NAS server written in Delphi for embedded platforms? Yes you can!

Graphics

The Delphi package for using WDOSX, DWPL, is actually very good! It maps the VESA screen buffer into a TScreen class, making it a snap to take full control of the computer's graphics. The downside, naturally, is that you won't get access to any GPU functionality, since VESA means the processor draws everything. Which is much slower.
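
To make the "processor draws everything" point concrete, here is a hypothetical pixel-plot routine. FrameBuffer, Pitch and BytesPerPixel are assumed parameters of my own, not actual DWPL API; the arithmetic is the part that matters:

    // Plot one pixel into a linear VESA framebuffer (hypothetical API)
    procedure PutPixel(FrameBuffer: PByte; Pitch, BytesPerPixel: Integer;
      X, Y: Integer; Color: Cardinal);
    var
      P: PByte;
    begin
      P := FrameBuffer;
      Inc(P, Y * Pitch + X * BytesPerPixel); // offset = row * pitch + column
      Move(Color, P^, BytesPerPixel);        // copy the color bytes in place
    end;

Every line, rectangle and font glyph ultimately reduces to loops over calls like this, which is exactly why it is slower than letting a GPU do the work.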

It also uses FreeType, the same font renderer used by OS X and Linux, to draw text on screen. This turns your home-brew OS into a more polished-looking system.

Networking

Believe it or not, that is also possible. It requires a network card with a packet driver for DOS. Those are rare today, but not as rare as you might think, because DOS is still the #1 system for offshore embedded systems and military boards. Strange as it may sound, DOS is alive and kicking to this day.

Libraries

Believe it or not, WDOSX supports the LoadLibrary() API, meaning that you can load and use DLL files (as long as those libraries don't collide with the limitations). That is pretty cool. You could in effect isolate common functionality in libraries for easier maintenance, or make DLLs your .exe format (so any application for your desktop would ship in the form of a DLL file, since that is the way to get executable code loaded).
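
A sketch of how such a DLL-based launcher could look, using the standard LoadLibrary/GetProcAddress calls; the AppMain entry point is my own hypothetical convention, not something WDOSX prescribes:

    uses
      Windows, SysUtils;

    type
      // Entry point every "application DLL" is expected to export
      TAppMain = procedure; stdcall;

    procedure LaunchApp(const FileName: string);
    var
      Lib: HMODULE;
      AppMain: TAppMain;
    begin
      Lib := LoadLibrary(PChar(FileName));
      if Lib = 0 then
        raise Exception.CreateFmt('Could not load %s', [FileName]);
      try
        @AppMain := GetProcAddress(Lib, 'AppMain');
        if not Assigned(AppMain) then
          raise Exception.Create('AppMain entry point missing');
        AppMain;          // hand control over to the "application"
      finally
        FreeLibrary(Lib);
      end;
    end;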

FreeDOS

You still need a system that does all the dirty work: talking to hardware interrupts and taking care of device IO. Well, this is where FreeDOS, the open-source and free DOS clone, comes in.

So your system would in effect look like this:

  • FreeDOS
  • Mouse driver
  • Graphics driver
  • Network packet driver
  • WDOSX
  • Your application

Desktop environment

Creating your own desktop is, well, not hard but time-consuming. A window is basically just a graphical region with various options for how it's displayed, positioned and drawn. It gets more complex than that, but once you have the basics implemented it's quite fun to program 🙂
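
As a minimal sketch (my own hypothetical types, not anything from DWPL), a desktop really just needs a window list and a z-ordered hit-test:

    uses
      Types;  // TRect

    type
      TWindow = class
        Bounds: TRect;   // position and size on the desktop
        Caption: string;
      end;

      TDesktop = class
        Windows: array of TWindow;  // index 0 = bottom of the z-order
        function WindowAt(X, Y: Integer): TWindow;
      end;

    function TDesktop.WindowAt(X, Y: Integer): TWindow;
    var
      i: Integer;
    begin
      Result := nil;
      // walk top-down so the frontmost window wins the hit-test
      for i := High(Windows) downto 0 do
        with Windows[i].Bounds do
          if (X >= Left) and (X < Right) and
             (Y >= Top) and (Y < Bottom) then
          begin
            Result := Windows[i];
            Exit;
          end;
    end;

From there it's a matter of painting the windows back-to-front into the framebuffer and routing mouse and keyboard events to whichever window the hit-test returns.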

Possible markets

The only viable economic market I can think of is backup and disk utilities. In fact, most of the recovery software that boots off a USB stick or CD-ROM is based on DOS. People use FreeDOS for a lot of things, but very few have used Delphi for this; it has primarily been the domain of C++ Builder.

But yes, with some effort you could create a fully independent desktop environment written completely in Delphi. I have also seen a full kick-start (boot code) written in FreePascal (believe it or not).

But for the sake of sanity I would stick to creating a desktop system designed for a specific task, like backup/restore of a disk (you have raw access to sectors, so it's a perfect fit).

A possible second market is embedded systems dealing with servers. Dedicated NAS servers are cheap these days, but you can actually build your own. It completely depends on your motivation (fun, money or hobby).

Either way, if someone took the time to work on this for a few months, Delphi would have a pretty good setup for creating embedded applications. The embedded market is very lucrative, especially among the companies that target the oil industry.

FreePascal

I actually haven't tested WDOSX with FreePascal, but I suspect it will work. I did a quick Google search and found that WDOSX is indeed mentioned under supported systems (in the Ubuntu man pages).

If the idea of writing your own desktop environment for embedded systems tickles your fancy, then why not investigate 🙂