Archive

Posts Tagged ‘Linux’

ARM Linux Services with Oxygene and Elements

October 14, 2019 Leave a comment

Linux is one of those systems that just appeals to me out of the box. I work with Windows on a daily basis, but at this point there is really nothing stopping me from jumping ship altogether. I mean, whenever I need something that is Windows specific, I can just fire up a virtual machine and get the job done there.

The only thing that is stopping me from going "all in" with Linux (and believe me, I have tried) is that finding proper documentation for Linux written with Windows converts in mind is actually a challenge in itself. Most tutorials are either meant for non-developers, like how to install a program via Synaptic and so on; which is brilliant if you have no experience with Linux whatsoever. But finding articles that aim to help a Windows developer get up to speed on Linux, that's the tricky bit.


Top-Left bash window shows the output of my Elements compiled micro-service

One of the features I wanted to learn about was how to run a program as a service on Linux. Under Windows this is quite easy. You have the service manager that gives you a good overview of registered services. And programmatically, a service is ultimately just a normal WinAPI program that supports the service-API messages. Writing services in either Object-Pascal or C# is pretty straightforward. I also do a lot of service work via Quartex Pascal (my own toolchain) that compiles to JavaScript. Node.js is actually a very capable service host once you understand the infrastructure.

Writing Daemons with Oxygene and Elements

Since the Elements compiler generates code for ARM Linux, learning how to get a service registered and started on boot is something that I think many developers will be interested in. It was one of the first questions I had when I started looking at Linux, and it took a while to find a clear-cut answer.

In this little article I will show you how I went about this, but please keep in mind that Linux never has "one way" of doing something. Part of the strength that Linux delivers is that you can configure and customize the system completely, from kernel to desktop. You literally have different service sub-systems to pick from, as well as different window managers, desktop systems (e.g. Wayland or X) and even keyring implementations. This is what makes Linux so confusing when coming from a monoculture like Microsoft Windows.

As for hardware, I'm using an ODroid N2, which is a very powerful ARM-based SBC (single board computer). You can use more or less any ARM device with Elements, providing the Linux distribution is based on Debian. So a Raspberry PI v4 with Ubuntu or Lubuntu will work fine. I'm using the ODroid N2 "full disk image" with Ubuntu Mate. So nothing out of the ordinary.

To make something clear right off the bat: a Linux service (called a daemon, from the ancient Greek word for "helper" and "informer") is just an ordinary shell application. You don't have to do anything in particular in terms of code. Once your service is registered, you can start and stop it with the systemctl shell command like any other Linux service.

Note: There are also fork() mechanisms (cloning processes), but that's out of scope for this little post.

Service manifest

Before we can get your binary registered as a service, we need to write a service manifest file. This is just a normal text-file in INI format that defines how you want your service to run. Here is an example of such a file:

[Unit]
Description=Elements Service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=qtx
ExecStart=/usr/bin/ElementsService.exe

[Install]
WantedBy=multi-user.target

Before you save this file, remember to replace the username (User=) and the name of your executable.

Note: The ExecStart property takes the full path to your executable, and it can be defined in 2 ways:

  • Direct path to the executable (like above)
  • Direct path to the executable + parameters

If your service needs a particular working directory, set that with the separate WorkingDirectory property rather than inside ExecStart (see the example below).
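
For example, a variant of the [Service] section with a working directory and a startup parameter might look like this. The --port switch is purely hypothetical, just to illustrate where parameters go:

[Service]
WorkingDirectory=/usr/bin
ExecStart=/usr/bin/ElementsService.exe --port 8080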

You can read more about each property here: systemd service info

Systemd

For Debian-based distributions (Ubuntu branches) the most common service host (or process manager) is called systemd. I am not even going to pretend to know the differences between systemd and the older init. There are fierce debates in the Linux community around these two alternatives. But unless you are a Linux C developer who likes to roll your own kernel on weekends, it's not really relevant for our goals in this post. Our task here is to write useful services and make them run side by side with other services.

With the service-manifest file done, we need to copy it into place where systemd can find it. So start by saving the manifest file as "elements.service" here:

/etc/systemd/system/elements.service

As you probably guessed from the ExecPath property, your service executable goes in:

/usr/bin/ElementsService.exe
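
Assuming you saved the manifest in your current directory and your compiled binary is named ElementsService.exe (adjust both names to match your own project), copying everything into place and asking systemd to re-read its unit files looks roughly like this:

sudo cp elements.service /etc/systemd/system/elements.service
sudo cp ElementsService.exe /usr/bin/ElementsService.exe
sudo systemctl daemon-reload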

If all went well you can now start your service from the command-line:

systemctl start elements

And you can stop the service with:

systemctl stop elements
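
To verify that the daemon is actually alive, or to inspect its output, the standard systemd tools work as you would expect (nothing Elements specific here):

systemctl status elements
journalctl -u elements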

Resident services

Starting and stopping a service is all well and good, but that doesn't mean it will automatically start when you reboot your Linux box. In order to make the service resident (persisted, so Linux remembers to fire it up on boot), you have to enable the service:

systemctl enable elements

If you want to stop the service from starting on boot, just disable it:

systemctl disable elements

Now there are a ton of things you can tweak and change in the service-manifest file. For example, do you want Linux to restart your service if it crashes? How many times should Linux attempt to bring the service back up? Should it only bring it back up if the exit-code is zero?

If you want Linux to always restart a service if it stops (regardless of reason), you set the following flag in the service-manifest:

Restart=always

If you want Linux to only restart if the service fails, meaning that the exit-code of the application is <> 0, then you use this value instead:

Restart=on-failure

You can also set the service to only start after some other service, for example if your service depends on networking (which is what we set in the service-manifest above), or on a database engine.
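
As a sketch, if your daemon needed both networking and a PostgreSQL database to be up first, the [Unit] section could look something like the following. The postgresql.service name is an assumption; use whatever unit name your database actually installs:

[Unit]
Description=Elements Service
After=network.target postgresql.service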

There are a ton of different settings you can apply to the service-manifest, but listing them all here would be a small book. Better to just check the documentation and experiment a bit. So check the link and pick the ones that make sense for your particular service.

Reflections

You should be very careful with how you define restart options. If something goes wrong and your service crashes on start, Linux will keep restarting it over and over again. Automatic restart creates a loop, and it can be wise to make sure it doesn't get stuck. I would set Restart to "on-failure" exclusively, so that your service has a chance to exit gracefully.
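
One way to guard against such a loop is to rate-limit the restarts in the manifest itself. A minimal sketch (the numbers are arbitrary) that tells systemd to wait five seconds between attempts and give up after five failures within two minutes:

[Unit]
StartLimitIntervalSec=120
StartLimitBurst=5

[Service]
Restart=on-failure
RestartSec=5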

Happy coding! And a special thanks to Benjamin Morel for his helpful posts.


Using multiple languages in the same project

August 21, 2019 1 comment

Most compilers can only handle a single syntax for any project, but the Elements compiler from RemObjects deals with 5 (five!) different languages – even within the same project. That's pretty awesome and opens the door to some considerable savings.

I mean, it's not always easy to find developers for a single language, but when you can approach your codebase from C#, Java, Go, Swift and Oxygene (object pascal) at the same time (inside the same project even!), you suddenly have some options. Especially since you can pick exotic targets like WebAssembly. Or what about compiling Java to .NET bytecode? Or using the VCL from C#? It's pretty awesome stuff!

Check out Marc Hoffman's article on the Elements compiler toolchain and how you can mix and match between languages, picking the best from each — while still compiling to a single binary of LLVM-optimized code:



Linux: political correctness vs Gnu-Linux hacker spirit

September 26, 2018 6 comments

Unless you have been living under a rock, the turmoil and crisis within the Linux community these past weeks cannot have escaped you. And while not directly connected to Delphi or the Delphi Developer group on Facebook, the effects of a potential collapse within the core Linux development team will absolutely affect how Delphi developers go about their business. In the worst possible scenario, should the core team and its immediate extended developers decide to walk away, their code walks with them. That would render much of the work countless companies have invested in the platform unreliable at best – or in need of a rewrite at worst (there is a legal blind spot in GPL revisions 1 and 2, allowing developers to rescind their code).

Large parts of the kernel would have to be re-invented, and a huge chunk of the substrata and bedrock that distributions like Ubuntu, Mint, Kali and others rest on could risk being removed, or rescinded as the term may be, from the core repositories. And perhaps worst of all, the hundreds of patches and new features yet to be released might never see the light of day.

To underline just how dire the situation has been the past couple of weeks, Richard Stallman, Eric S. Raymond, Linus Torvalds and others are threatening, openly and legally, to pull all their code (September 20th, Linux Kernel Mailing List) if the bullying by a handful of activist groups doesn't stop. Linus is still in limbo, having accepted the code of conduct these activists demand implemented, but has yet to return to work.


Linus Torvalds is famous for many things, but his personality is not one of them

But the interesting part of the Linux debacle is not the ifs and buts, but rather the methods used by these groups to get their way. How can you enforce a "code of conduct" using methods that are themselves in violation of that code of conduct? It really is a case of "do as I say, not as I do", and it has escalated into a gutter fight masquerading as social warfare where slander, stigma, false accusations and personal attacks of the worst possible type are the weapons. All of which is now having a real and tangible impact on business and technology.

Morally bankrupt actions are not activism

These activists, if they deserve that title, even went so far as to dig into the sex life of one of the kernel developers. And when finding out that he was into BDSM (a form of sexual role-play), they publicly stigmatized the coder as a rape sympathizer (!). Not because it's true, but because the verbal association alone makes it easier for bullies like Coraline to justify the social execution of a man in public.

What makes my jaw drop in all this is the complete lack of compassion these so-called activists demonstrate. They seem blind to the price of such a stigma for the innocent; not to mention the insult to people who have suffered sexual abuse in their lives. For each false accusation of rape that is made, the difficulty for actual abuse victims to seek justice increases exponentially. It is a heartless, unforgivable act.

Personally, I can't say I understand the many sexual preferences people have these days. I find myself googling what the different abbreviations mean. The movie Fifty Shades of Grey revolved around this stuff. But one thing is clear: as long as there are consenting adults involved, it is none of our business. If there is evidence of a crime, then it should be brought before the courts. And no matter what we might feel about the subject at hand, it can never justify stigmatizing a person for the rest of his life. Not only is this a violation of the very code of conduct these groups want implemented – it's also illegal in most of the civilized world. And damn immoral and out of line if you ask me.

The goal cannot justify the means

The irony in all of this is that the accusation came from Coraline herself. A transgender woman born in the wrong body; a furious feminist now busy fighting to put an end to the bullying of transgender minorities in the workplace (which she claims is the reason she got fired from GitHub). Yet she has no problem being the worst kind of bully herself on a global scale. I question whether Coraline is even morally fit to represent a code of conduct. I mean, to even use slander such as "rape sympathizer" in the context of getting a code of conduct implemented? Digging into someone's personal life and then using their sexual preference as leverage? It is utterly outrageous!

It is unacceptable and has no place in civilized society. Nor does a code of conduct, beyond ordinary expectations of decency and tolerance, have any place in a rebel-motivated R&D movement like Linux.

Linux is not Windows or OS X. It was born out of the free software movement (Stallman with GNU, with roots going back to the hacker culture of the late 1960s) and the Scandinavian demo and hacker scene of the 80's and 90's (the Linux kernel that GNU rests on). This is hacker territory, and what people might feel about this in 2018 is utterly irrelevant. These are people that start the day with 4chan for Pete's sake! The primary motivation of Stallman and Linus is to undermine, destroy and bury Microsoft and Apple in particular. And they have made no secret of this agenda.

Expecting Linux or its makers to be politically correct is infantile and naive, because Linux is at its heart a rebellion, "a protest of technical excellence and access to technology undermining commercial tyranny and corporate slavery". That is not my personal opinion, that is straight out of Richard Stallman's book Free as in Freedom; his papers read more like a religious manifesto; a philosophical foundation for a technological utopia, seeded and influenced by the hippie spirit of the 1960s. Which is where Stallman's heart comes from.

You cannot but admire Stallman for sticking to his principles for 50+ years. And thinking he is going to just roll over because activists in this particular decade have a beef with how hackers address each other or comment their code, well — I don't think these activists understand the hacker community at all. If they did, they would back off and stop poking dragons.

Linux vs the sensitivity movement?

Yesterday I posted a video article on Delphi Developer that explained some of this in simple, easy terms. I picked the video that summed up the absurdities involved (as outlined above) as quickly as possible, rather than some 80-minute talk on YouTube. We have a long tradition of posting interesting IT news, things that are indirectly connected with Delphi, C++ Builder or programming in general. We also post articles that have no direct connection at all – except being headlines within the world of IT. This helps people stay aware of interesting developments, trends and potential investments.


The head of the “moral codex” doesn’t strike me as unbiased and without an axe to grind

As far as politics is concerned I have no interest whatsoever. Nor would I post political material in the group, because our focus is technology, Delphi, object pascal and software development in general. The exception being if there is a bill or law passed in the US or EU that affects how we build applications or handle data.

Well, this post was no different.

What was different was that some individuals are so acclimatized to political debate that they interpret everything as a political statement. So criticism of the methods used is made synonymous with criticism of a cause. This can happen to the best of us; human beings are passionate animals, and I think we can all agree that politics has taken up an unusual amount of space lately. I can't ever remember politics causing so much bitterness, division and hate as it does today. Nor can I remember sound reason being pushed aside in favour of immediate emotional trends. And it really scares me.

Anyways, I wrote that "I stand by my god given rights to write obscene comments in my code", which is a reference to one of the topics Linus is being flamed for, namely his use of the F word in his own code. My argument is that the kernel is ultimately Torvalds' work, and it's something he gives away for free. I don't have any need for obscenity in my code, but I sure as hell reserve the right to it in my personal projects. How two external groups (in this case a very aggressive feminist group combined with LGBTQIA) should have any say in how Linus formats his code (or how you format yours, for that matter) or the comments he writes – it makes no sense. It's free, take it or leave it. And if you join a team and feel offended by how things are done, you either ignore it or leave.

It might not be appropriate of Linus to use obscenity in his comments, but do you really want people to tell you what you can or cannot write in your own code? Lord knows there are pascal units online that have language unfit for publishing, but nobody is forcing you to use them. I can't stand Java, but I don't join their forums and sit there like a 12-year-old bitching about how terrible Java is. It's just an infantile, absurd mentality.

So that is what my reference was to, and I took for granted that people would pick up on that, since Linus is infamous for his spectacular rants in the kernel (and verbally in interviews). Some of his commits have more rants than code, which I find hilarious. There is a collection of them online and people read them for kicks, because he is, for all intents and purposes, the Gordon Ramsay of programming.

And I also made a reference to “tree hugging millennial moralists”. Not exactly hard-core statements in these trying times. We live in a decade where vegan customers are looking to sue restaurants for serving meat. Maybe I’m old-fashioned but for me, that is like something out of Monty Python or Mad Magazine. I respect vegans, but I will not be dictated by them.

I mean, the group people call millennials is after all recognized as a distinct generation due to a pattern of unreasonable demands on society (and in extreme cases, reality itself). In some parts of the world this is a real problem, because you have a whole generation that expects to bag an upper-management salary on a paper route. When this is not met you face a tantrum and an aggressiveness that should not exist beyond a certain age. Having a meltdown like a six-year-old when you are twenty-six is, well, not something I'm even remotely interested in dealing with.

And I speak from experience here; I had the misfortune of working with one extreme case for a couple of years. He had a meltdown roughly once a month and verbally abused everyone in the office. Including his boss. I still can't believe he put up with it for so long, a lesser man would have physically educated him on the spot.

The sensitivity movement

But (and this is important) like always, a stereotype is never absolute. The majority within the millennial age group are nothing like these extreme cases. In fact we have two administrators in Delphi Developer who both fall under the millennial age group – yet they are the exact opposite of the stereotype. They are extremely hard-working, demonstrate good morals and good behavior, they give of themselves to the community and are people I am proud to call my friends.

The people I refer to as the sensitivity movement consist of men and women who, in my view, make unreasonable demands on life. We live in times where for some reason, and don't ask me why, minorities have gotten away with terrible things (slander, straw-man tactics, blame shifting, perversion of facts, verbal abuse, planting dangerous rumours and false accusations; things that can ruin a person for life) to impose their needs as opposed to the greater good and the majority. And no, this has nothing to do with politics, it has to do with expectations of normal decency and minding your own business. As a teenager I had my share of rebellion (some would say three shares), but I never blamed society; instead I wanted to understand why society was the way it is, which led me to studying history, comparative religion and philosophy.

The minorities of 2018 have no interest in understanding why; they mistake preference for offence, confuse kindness with weakness – and are painfully unable to discern knowledge from wisdom. The difference between fear and respect might be subtle, but on reflection a person should make important discoveries about their own nature. Yet this seems utterly lost on men and women in their 20s today.

And just to make things crystal clear: the minorities I am referring to here as the so-called sensitivity movement, are not the underprivileged or individuals suffering a disadvantage. The minorities are in fact highly privileged individuals – enjoying the very freedom of expression they so eagerly want taken away from people they don’t like. That is a very dangerous path.

Linux, the bedrock of the anti-establishment movement

The Linux community has a history of being difficult. Personally I find them both helpful and kind, but the core motivation behind Linux as a phenomenon cannot be swept under the carpet or ignored: these are rebels, rogues, people who refuse to bend the knee.

Linux itself is an act of defiance, and it exists due to two key individuals who both are extremely passionate by nature, namely Richard Stallman and Linus Torvalds.

Attacking these from all sides is shameful. I find no other words for it. Especially since it's not a matter of principles or sound moral values, but rather a matter of pride and selfish ideals.

Name calling will not be tolerated

The reason I wrote this post was not to involve everyone in the dire situation of Linux, and certainly not to bring an external problem into our community and make it our own. It was simply news of some importance.

I wrote this blog post because a member somehow nicknamed me as "MAGA right-wing" something. And I'm not even sure how to respond to something like that.

First of all I have no clue what MAGA even is; I think it's that cap slogan Trump uses? Make America great again or something like that? Secondly, I live in Norway and know very little of the intricacies of domestic American politics. I have voted left for some 20 years, with the exception of the last Norwegian election, when I voted center. How my respect for Stallman and Linus, and for how the hacker community operates (I grew up in the hacker community), somehow connects me to some political agenda on another continent is quite frankly beyond me.

But this is exactly the thing I wrote about above – the method being deployed by these groups. A person reads something he or she doesn't like, connects that to a pre-defined personality type, uses that to justify wild accusations – and then proceeds directly to treating someone accordingly. THAT behavior IS offensive to me, because there should be a dialog quite early in that chain of events. We have dialog to avoid causing harm – not as a means to cause further damage.

Is it the end of Linux as we know it?

No. Linus has been a loudmouth for ages, and he actually has people who purge his code of swear words (which is kinda funny) – but he has accepted the code of conduct and taken some time off.

The threat Stallman and the core team have made, however, is very real, meaning that the inner circle of Linux developers can flick the kill switch if they want to. But I think the negative press Coraline and those forcing their agenda onto the Linux Foundation are getting will make them regret it. And of course, articles like the one the New Yorker published didn't help the situation.

Having said that, these developers are not normal people. Normal is a cross-section of average behavior. And neither Stallman, Linus nor the hacker community fall under the term "normal" in the absolute sense of the word. Not a single individual that has done something of technological importance falls into that group. Nor do they have any desire to be normal, which is a death sentence in the hacker community. The lowest, most worthless status you can hold as a hacker is normal.

These are people who build operating systems for fun. They are passion driven, artistic and highly emotional. And as such they could, should more gutter tricks be deployed, decide to burn the house down before they hand it over.

So it’s an interesting case well worth keeping an eye on. Preferably one that doesn’t add or subtract from what is there.

Smart Pascal file enumeration under node.js

May 10, 2018 Leave a comment

Ok. I admit it. Writing an RTL from scratch has been one of the hardest tasks I have ever undertaken. Thankfully I have not been alone, but since I am the lead developer for the RTL, it naturally falls on me to keep track of the thousands of classes it comprises; how each affects the next, the many inheritance chains and the subsequent causality timelines that each namespace represents.

We were the first company in the world to do this, to establish the compiler technology and then author a full RTL on top of that – designed to wrap and run on top of the JavaScript virtual machine. To be blunt, we didn't have the luxury of looking at what others had done before us. For every challenge we have had to come up with solutions ourselves.

Be that as it may, after seven years we have gotten quite good at framework architecture. So whenever we need to deal with a new runtime environment such as node.js, we can lean on a lot of experience with async JSVM development, and we are able to absorb and adapt much faster than our competitors.

Digging into a new platform

Whenever I learn a new language, I typically make a little list of “how do I do this?” type questions. It can be simple, like writing text to stdout, or more elaborate like memory mapped files, inheritance model, raw memory access and similar topics.

But one of the questions have always been: how do I enumerate files in a folder?

While this question is trivial at best, it stabs at the heart of the substructure of any language. On operating systems like Linux a file is not just data on a disk like we are used to from Windows. A file can be a socket, a virtual access point exposed by the kernel, a domain link, a symbolic link or a stream. So my simple question is actually motivated by a wish to expose the depth of the platform I'm learning. I then write down whatever topics come up and research / experiment on them separately.

Node, like the browser, executes code asynchronously. This means that the code you write cannot be blocking (note: node does support synchronous file IO methods, but you really don’t want to use them in a server. They are typically used before the server is started to load preferences files and data).
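
A typical example of acceptable synchronous IO would be reading a settings file once, before the server starts accepting requests. A minimal sketch in plain node.js (the settings.json filename is just an assumption for illustration):

const fs = require('fs');

// Blocking read is acceptable here: it runs once, at startup, before any requests arrive
const settings = JSON.parse(fs.readFileSync('settings.json', 'utf8'));
console.log('Settings loaded:', settings);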

As you can imagine, this throws conventional coding out the window. Node exposes a single function that returns an array of filenames (array of string), which helps, but it tells you nothing about the files. You don’t get the size, the type, create and modify timestamps – just the names.

To get the information I just mentioned you have to call a function called "fs.stat". This is a common POSIX filesystem command. But again we face the fact that everything is async, so a conventional "for / next" loop is useless.
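
To illustrate the point, here is a minimal sketch of what it takes to combine fs.readdir and fs.stat without blocking, using the callback style discussed above (error handling kept to a bare minimum):

const fs = require('fs');
const path = require('path');

fs.readdir('.', function (err, names) {
    if (err) throw err;
    names.forEach(function (name) {
        const fullPath = path.resolve('.', name);
        fs.stat(fullPath, function (err, stat) {
            if (err) return console.error(err);
            // Size, timestamps and isDirectory() are only available here, inside the
            // callback - which is why a conventional for / next loop gets you nowhere
            console.log(name, stat.size, stat.isDirectory() ? '[dir]' : '');
        });
    });
});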

Luke Filewalker

In version 3.0 of Smart Mobile Studio our Node.JS namespace (collection of units with code) has been upgraded and expanded considerably. We have thrown out almost all our older dependencies (like utf8.js and base64.js) and implemented these as proper codec classes in Smart Pascal directly.

Our websocket framework has been re-written from scratch. We threw out the now outdated websocket-io and instead use the standard “ws” framework that is the most popular and actively maintained module on NPM.

We have also implemented the same storage-device class that is available in the browser, so that you can write file-io code that works the same both server-side and client-side. The changes are in the hundreds, so I won't iterate through them all here; they will be listed in detail in the release-notes document when the time comes.

But what is a server without a fast, reliable way of enumerating files?

Well, here at the Smart Company we use our own products. So when writing servers and node micro-services we face the exact same challenges as our customers would. Our job is to write ready solutions for these problems, so that you don’t have to spend days and weeks re-inventing the wheel.

Enumerating files is handled by the class TNJFileWalker (I was so tempted to call it Luke). This takes care of everything for you, all the nitty-gritty is neatly packed into a single, easy to use class.

Here is an example:


Enumerating files has literally been reduced to child's play

The class also exposes the events you would expect, including a filtering event where you can validate whether a file should be included in the final result. You can even control the dispatching speed (or delay between item processing), which is helpful for payload balancing. If you have 100 active users all scanning their files at the same time, you probably want to give node the chance to breathe (20ms is a good value).

The interface for the class is equally elegant and easy to understand:


What would you prefer to maintain? 500.000 lines of JavaScript or 20.000 lines of pascal?

Compare that to some of the spaghetti JavaScript developers have to live with just to perform a file-walk and then do a recursive “delete folder”. Sure hope they check for “/” so they don’t kill the filesystem root by accident.

const fs = require('fs');
const path = require('path');

function filewalker(dir, done) {
    let results = [];

    fs.readdir(dir, function(err, list) {
        if (err) return done(err);

        var pending = list.length;

        if (!pending) return done(null, results);

        list.forEach(function(file){
            file = path.resolve(dir, file);

            fs.stat(file, function(err, stat){
                // If directory, execute a recursive call
                if (stat && stat.isDirectory()) {
                    // Add directory to array [comment if you need to remove the directories from the array]
                    results.push(file);

                    filewalker(file, function(err, res){
                        results = results.concat(res);
                        if (!--pending) done(null, results);
                    });
                } else {
                    results.push(file);

                    if (!--pending) done(null, results);
                }
            });
        });
    });
};

function deleteFile(dir, file) {
    return new Promise(function (resolve, reject) {
        var filePath = path.join(dir, file);
        fs.lstat(filePath, function (err, stats) {
            if (err) {
                return reject(err);
            }
            if (stats.isDirectory()) {
                resolve(deleteDirectory(filePath));
            } else {
                fs.unlink(filePath, function (err) {
                    if (err) {
                        return reject(err);
                    }
                    resolve();
                });
            }
        });
    });
};

function deleteDirectory(dir) {
    return new Promise(function (resolve, reject) {
        fs.access(dir, function (err) {
            if (err) {
                return reject(err);
            }
            fs.readdir(dir, function (err, files) {
                if (err) {
                    return reject(err);
                }
                Promise.all(files.map(function (file) {
                    return deleteFile(dir, file);
                })).then(function () {
                    fs.rmdir(dir, function (err) {
                        if (err) {
                            return reject(err);
                        }
                        resolve();
                    });
                }).catch(reject);
            });
        });
    });
};

Using Smart Mobile Studio under Linux

April 22, 2018 Leave a comment

Every now and then when I post something about Smart Mobile Studio, an individual or two wants to inform me that they cannot use Smart because it's not available for Linux. While more rare, the same thing happens now and then with OS X.


The Smart desktop demo running in Firefox on Ubuntu, with Quake 3 at 60 fps

While the request for Linux or OS X support is both valid and understandable (and something we take seriously), more often than not these questions can be a precursor to a larger picture; one that touches on open source, pricing and personal philosophical points of view.

Truth be told, the price we ask for Smart Mobile Studio is close to symbolic. Especially if you take the time to look at the body of work Smart delivers. We are talking hundreds of hand-written units with thousands of classes, each specifically adapted for HTML5, Phonegap and Node.js. Not to mention ports of popular JavaScript frameworks.

If you compare this to native object pascal development tools with similar functionality, they can set you back thousands of dollars. With an asking price of $149 for the pro edition, $399 for the enterprise edition, and a symbolic $42 for the basic edition, ours is an affordable solution. Also keep in mind that this gives you access to updates for a duration of 12 months. When was the last time you bought a full development suite that allows you to write mobile applications, platform-independent servers, platform-independent system services and HTML5 web applications for less than $400?


Our price model is more than reasonable considering what you get

By platform independent we mean that in the true sense of the word: once compiled, no changes are required. You can write a system service on Windows and it will run just fine under Linux or OS X. No re-compile needed. You can take a server and copy it to Amazon or Azure, run it in a cluster or scale it from a single instance to 100 instances without any change. That has been unheard of for object pascal until now.

Smart Mobile Studio is the only object pascal development system that delivers a stand-alone IDE, a stand-alone compiler, and a vast object-oriented run-time library written from scratch to cover HTML5, Node.js and embedded systems that run JavaScript.

And yes, we know there are other systems in circulation, but none of them are even close to the functionality that we deliver. Functionality that has been polished for seven years now. And our RTL is growing every day to capture and expose more and more advanced functionality that you can use to enrich your products.


The RTL class browser shows the depth of our RTL

As of writing we have a team of six people working on Smart Mobile Studio. We have things in our labs that are going to change the way people build applications forever. We were the first to venture into this new landscape. There was nobody we could imitate, draw inspiration from or learn from. We had to literally make the path as we moved forward.

And our vision and goal remains the same today as it was seven years ago: to empower object pascal developers – and to secure their investment in the language and methodology that is object pascal.

Discipline and purpose

There is so much I would like to work on right now. Things I want to add to Smart Mobile Studio because I find them interesting and powerful, and because I know people are going to love them. But that style of development, the "garage days" as people call it, is over. It does wonders in the beginning of a project maybe, but eventually you reach a stage where a formal timeline and business plan must be carved in stone.

And once defined, you have to stick to it. It would be an insult to our customers if we pivoted left and right on a whim. Like every company we have room for research, even a couple of skunkwork projects, but our primary focus is to make our foundation rock solid before further growth.


By tapping into established JavaScript frameworks you can cover more than 40 embedded systems and micro-controllers. More and more hardware supports JS for automation

Our “garage days” ended around three years ago, and through hard work we defined our timeline, business model and investor program. In 2017 we secured enough capital to continue full-time development.

Our timeline has been published earlier, but we can re-visit some core points here:

The visual components that shipped with Smart Mobile Studio in the beginning were meant more as examples than actual ready-to-use modules. This was common for other development platforms of the day, such as Xamarin's C# / Mono toolchain, where you were expected to inherit from and implement aspects of a "partial class". This is also why Smart Pascal has support for partial classes (neither Delphi nor Freepascal supports this wonderful feature).


One of our skunkwork projects is a custom linux distro that runs your Smart applications directly in the Linux framebuffer. No X or desktop, just your code. Here running “the smart desktop” as the only visual front-end under x86 vmware

Since developers coming from Delphi had different expectations, there was only one thing to do: completely re-write every single visual control (and add a few new controls) so that they matched our customers' expectations. So the first stretch of our timeline has been 100% dedicated to the visual aspects of our RTL. While doing so we have made the RTL faster and more efficient, and added some powerful sub-systems:

  • A dedicated theme engine
  • Unified event delegation
  • Storage device classes
  • Focus and control tracking
  • Support for relative position modes
  • Support for all boxing models
  • Inline linking ( {$R “file.js”} will now statically link an external library)
  • And much, much more

So the past eight months have been all about visual components.


Theming is important

The second stretch, which we are in right now, is dedicated to the non-visual infrastructure. This means in particular Node.js but also touches on non-visual components, TAction support and things that will appear in updates this year.

Node.js is especially important since it allows you to write platform and chipset independent web servers, system services and command-line applications. This is pioneering work and we are the first to take this road. We have successfully tamed the alien landscape of JavaScript, both for client, mobile and server – and terraformed it into a familiar, safe and productive environment for object pascal developers.

I feel the results speak for themselves, and our next update brings Smart Mobile Studio to the next level: full-stack cloud and web development. We now cover back-end, middleware and front-end technologies. And our RTL now stretches from micro-controllers to mobile applications to clustered cloud services.

This is the same technology used by Netflix to process terabytes of data every second on a global scale. Which should tell you something about the potential involved.

Working on Linux

Since Smart Mobile Studio was designed to be a Swiss army knife for Delphi and Lazarus developers, capable of reaching segments of the market where native code is unsuitable, our primary focus is Microsoft Windows. At least for now.

Delphi itself is a Windows-based development system, and even though it supports multiple targets, Windows is still the bread and butter of commercial software development.

Like I mentioned above, we have a timeline to follow, and until we have reached the end of that line, we are not prepared to refactor our IDE for Linux or OS X. This might change sooner than people think, but until our timeline for the RTL is concluded, we will not allocate time for making the IDE platform independent.

But, you can actually run Smart Mobile Studio on both Linux and OS X today.

Linux has a system called Wine. This is a system that implements the Windows API, but delegates all the calls to Linux. So when a Windows program calls a WinAPI function through Wine, it's delegated to the Linux variation of the same call. This is a massive undertaking, but it has years of work behind it and functions extremely well.

So on Linux you can install it by opening up a shell and typing:

sudo apt-get install wine

I take for granted here that your Linux flavour has APT installed (I'm using Ubuntu since that is easy to work with), which is the package manager that gives you the "apt-get" command. If you have some other system then just google how to install a package.

With Wine and its dependencies installed, you can run the Smart Mobile Studio installer. Wine will create a virtual, sandboxed disk for you – so that all the files end up where they should. Once finished, you punch in the license serial number as normal, and voila – you can now use Smart Mobile Studio on Linux.

Note: in some cases you have to right-click the SmartMS.exe and select “run with -> Wine”, but usually you can just double-click the exe file and it runs.

Smart Mobile Studio on OSX

Wine has been ported to OS X, and there the process is even more user-friendly. You download a program called WineBottler, which takes Smart and bundles it with Wine plus any dependencies it needs. You can then start Smart Mobile Studio as if it were a normal OS X application.

Themes and look

The only problem with Wine is that it doesn't support Windows themes out of the box. It would be illegal for them to ship those files. But you can manually copy over the Windows theme files and install them via the Wine config application. Once installed, Smart will look as it should.
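
For reference, the Wine config application mentioned here is winecfg, which you can launch from a shell once Wine is installed (a standard Wine tool, nothing Smart-specific); the theme files themselves you have to copy over from a Windows machine yourself:

winecfg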

By default the old Windows 95 look & feel is used by Wine. Personally I don't care too much about this, it's being able to code, compile and run the applications that matters to me – but if you want a more modern look then just copy over the Windows theme files and you are all set.


The Amiga ARM project

April 19, 2018 6 comments

This has been quite the turbulent week. Without getting into all the details, a post that I made with thoughts and ideas for an Amiga inspired OS for ARM escaped the safe confines of our group, Amiga Disrupt, and took on a life of its own.
This led to a few critical posts being issued publicly, which all boiled down to a misunderstanding. Thankfully this has been resolved and things are back to normal.

The question on everyone's lips now seems to be: did Jon mean what he said or was it just venting frustration? I thought I made my points clear in my previous post, but sadly Commodore USA formulated a title open for interpretation (which is understandable considering the mayhem at the time). So let's go through the ropes and put this to rest.

Am I making an ARM based Amiga inspired OS?

Hopefully I don’t have to. My initial post, the one posted to the Amiga Disrupt comment section (and mistaken for a project release note), had a couple of very clear criteria attached:

If nothing has been done to improve the Amiga situation [with regards to ARM or x86] by the time I finish Amibian.js (*), I will take matters into my own hand and create my own alternative.

(*) As you probably know, Amibian.js is a cloud implementation of Amiga OS, designed to bring Amiga to the browser. It is powered by a node.js application server; a server that can be hosted either locally (on the same machine as the HTML5 client) or remotely. It runs fine on popular embedded devices such as the Tinkerboard and ODroid, and when run in a full-screen browser with no X or Windows desktop behind it – it is practically indistinguishable from the real thing.

We have customers who use our prototype to deliver cloud based learning for educational institutions. Shipping ready to use hardware units with pre-baked Amibian.js installed is perfect for schools, libraries, museums, routers and various kiosk projects.


Amibian.js, here running Quake 3 at 60 fps in your browser

Note: This project started years before FriendOS, so we are not a clone of their work.

Obviously this is a large task for one person, but I have written the whole system in Smart Mobile Studio, which is a product our company started some 7 years ago and that now has a team of six people behind it. In short, it takes object pascal code such as Delphi and Freepascal and compiles this to JavaScript, suitable for both the browser and NodeJS. It gives you a full IDE with form designer, drag & drop visual components and a vast and rich RTL (run-time library), which naturally saves me a lot of time and gives me an edge over other companies working with similar technology. So while it's a huge task, it's leveraged considerably by the toolchain I made for it.

So am I making a native OS for ARM or x86? The short answer: I will if the situation hasn't dramatically improved by the time Amibian.js is finished.

Instead of wasting years trying to implement everything from scratch, Pascal Papara took the Linux kernel and ran with it. So Aeros boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded.

If you are thinking “so what, who the hell do you think you are?” then perhaps you should take a closer look at my work and history.

I am an ex-Quartex member, which was one of the most infamous hacking cartels in Europe. I have 30 years of software development behind me, having worked as a professional developer since the age of 17. I have a history of taking on "impossible" projects and finding ways to deliver them. Smart Mobile Studio itself was deemed impossible by most Delphi developers; it was close to heresy, triggering an avalanche of criticism for even entertaining the idea that object pascal could be compiled to JavaScript, let alone thrive on the JSVM (JavaScript Virtual Machine).


Amibian.js runs javascript, but also bytecodes. Here showing the assembler prototype

You can imagine the uproar when our generated JavaScript code (compiled from object pascal) actually bested native code. I must admit we didn’t expect that at all, but it changed the way Delphi and object pascal developers looked at the world – for the better I might add.

What I am good at is taking ordinary off-the-shelf parts and assembling them in new and exciting ways. Often ways the original authors never intended, in order to produce something unique. My faith is not in myself, but in the ability and innate capacity of human beings to find solutions. The biggest obstacle to progress is ultimately pride and fear of losing face. Something my Buddhist training beat out of me ages ago.

So this is not an ego trip, it’s simply a coder that is completely fed-up with the perpetual mismanagement that has held Amiga OS in captivity for two decades.

Amiga OS is a formula, and formulas are bulletproof

People love different aspects of the same thing – and the Amiga is no different. For some the Amiga is the games. Others love it for its excellent sound capabilities, while some love it for the ease of coding (the 68k is the most friendly cpu ever invented in my book). And perhaps all of us love the Amiga for the memories we have. A harmless yet valuable nostalgia of better times.


Amiga OS 3.1 pimped up, running on Amibian [native] Raspberry PI 3b

But for me the love was always the OS itself. The architecture of Amiga OS is so elegant and dare I say, pure, compared to other systems. And I’m comparing against both legacy and contemporary systems here. Microsoft Windows (WinAPI) comes close, but the sheer brilliance of Amiga OS is yet to be rivaled.

We are talking about a design that delivers a multimedia-driven, window-based desktop 10 years before the competition. A desktop that would thrive in as little as 512 KB of RAM, with fast and reliable pre-emptive multitasking.

I don't think people realize or understand the true value of Amiga OS. It's not in the games (although games are definitely a huge part of the experience), the hardware or the programs. The reason people have been fighting bitterly over Amiga OS for a lifetime is because the operating system architecture or "formula" is unmatched to this very day.

Can you imagine what a system that thrives under 512 KB would do to the desktop market? Or even better, what it could bring to the table for embedded and server technology?

And this is where my frustration soars. Even though we have OS 4.1, we have been forced to stand idly by and watch as mistake after mistake is made. Opportunities that are ripe for the taking (some of them literally placed on the doorstep of Hyperion) have been thrown by the wayside time and time again.

And they are not alone. Aros and Morphos have likewise missed a lot of opportunities. Both opportunities to generate income and secure development, as well as to embrace new technology. Although I must stress that I sympathize with Aros since they lack any official funding. Morphos is doing much better using a normal, commercial license.

Frustration, the mother of invention

When the Raspberry PI was first released I jumped for joy. Finally an SBC (single board computer) with enough power to run a light version of Amiga OS 4.1, with a price tag that everyone can live with. I rushed over to Hyperion to see if they had issued a statement about the PI, but nothing could be found. The AEON site was likewise empty.

The PI version 2 came and went, still with no sign that Hyperion would capitalize on the situation. I expected them to issue an "Amiga OS 4.1 light" edition for ARM, which would put them on the map and help them establish a user base. Without a user base and fresh blood there is no chance in hell of selling next generation machines in large enough quantities to justify future development. But once again, opportunity after opportunity came and went.

Sexy, fast and modern: Amiga OS 4.1 would do wonders on ARM

Faster and better-suited SBCs started to turn up in droves: the ODroid, the BeagleBone Black, the Tinkerboard, the Banana PI – and many, many others. When the SnapDragon IV CPUs shipped on a $120 SBC, which is the same processor used by the Samsung Galaxy 6S, I was sure Hyperion would wake up and bring Amiga OS to the masses. But not a word.

Instead we were told to wait for the Amiga x5000, which is based on PPC. I have no problem with PPC, it's a great platform and packs a serious punch. But since PPC no longer sells to mainstream computer companies like it used to, the price penalty would be nothing short of astronomical. There is also the question of longevity and being able to maintain a PPC-based system for the foreseeable future. Where exactly will PPC be in 15 years?

Note: One of the reasons PPC was selected has to do with coding infrastructure. PPC has an established standard, something ARM lacked at the time (this was first established for ARM in 2014). PPC also has an established set of development platforms that you can build on, with libraries and pre-fab modules (pre fabricated modules, think components that you can use to quickly build what you need) that have been polished for two decades now. A developer who knows PPC from the Amiga days will naturally feel more at home with PPC. But sadly PPC is the past and modern development takes place almost exclusively on ARM and x86. Even x86 is said to have an expiration date now.

The only group that genuinely tried to bring Amiga OS to ARM has been the Aros team. They got their system compiled, implemented some rudimentary drivers (information on this has been thin to say the least) and had it booting natively on the Raspberry PI 3b. Sadly they lacked a USB stack (remember I mentioned pre-fab modules above? Well, this is a typical example. PPC devtools ship with modules like this out of the box) so things like mouse, keyboard and external peripherals wouldn’t work.


Aeros, the fastest Amiga you will ever play with. Running on the Raspberry PI 3b

And like always, which is the curse of Amiga, "something came up", and the whole Raspberry PI / ARM initiative was left for dead. The details around this are sketchy, but the lead developer had a personal issue that forced him to set a new direction in life. And for some reason the other Aros developers have just continued with x86, even though a polished ARM version could have made them some money and helped finance future development. It's the same story, again and again.

But then something amazing happened! Out of the blue came Pascal Papara with a new take on Aros, namely AEROS. This is a distro after my own heart. Instead of wasting years trying to implement everything from scratch, Pascal took the Linux kernel and ran with it. So Aeros boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded. And the result? It is the fastest desktop you will ever experience on ARM. Seriously, it runs so fast and smooth on the Raspberry PI that you could easily mistake it for a $450 Intel i3.

Sadly Pascal has been more or less alone in this development. And truth be told, he has molded it to suit his own needs rather than the consumer's. Since his work includes a game machine and some Linux services, the whole Linux system is exposed to the Aros desktop. This is a huge mistake.

Using the Linux kernel to capitalize on the thousands of man-hours invested in it, not to mention the massive Linux driver database, is a great idea. It's also the first thing that came into my mind when contemplating the issue.

But when running Aros on top of this, the Linux aspect of the system should be abstracted away. Much like what Apple did with Unix. You should hardly notice that Linux is there unless you open a shell and start to investigate. The Amiga filesystem should be the only filesystem you see when accessing things from the desktop, and a nice preferences option for showing / hiding mounted Linux drives.

My plans for an ARM based Amiga inspired OS

Building an OS is not a task for the faint of heart. Yes, there are a lot of embedded / pre-fab based systems to pick from out there, but you also have to be sensible. You are not going to code a better kernel than Linus Torvalds, so instead of wasting years trying to catch up with something you cannot possibly catch up with – just grab the kernel and make it work for us.

The Linux kernel solves things such as process contexts, "userland" vs "kernel space" (giving the kernel the power to kill a task and reclaim resources), multitasking / threading, thread priorities, critical sections, mutexes and global event objects; it gives us IPC (inter-process communication), disk IO, established and rock-solid sound and graphics frameworks; and last but perhaps most important: free access to the millions of drivers in the Linux repository.


Early Amibian.js login dialog

You would have to be certified insane to ignore the Linux kernel, thinking you will somehow be the guy (or group) that can teach Linus Torvalds a lesson. This is a man who has been writing kernels for 20+ years, and he does nothing else. He is surrounded by a proverbial army of developers that code, test, refactor and strive to deliver optimal performance, safety and quality assurance. So sorry if I push your buttons here, but you would be a moron to take him on. Instead, absorb the kernel and gain access to the benefits it has given Linux (technically the kernel is "Linux", the rest is GNU – but you get what I mean).

With the Linux kernel as a foundation, as much as 50% of the work involved in writing our OS is finished already. You don't have to invent a driver API. You don't have to invent a new executable format (or write your own ELF parser if you stick with the Linux executable format). You can use established compilers like GCC / Clang and Freepascal. And you can even cherry-pick some low-level packages for your own native API (like SDL, OpenGL and things that would take years to finish).

But while we want to build our house on rock, we don’t want it to be yet another Linux distro. So with the kernel in place and a significant part of our work done for us, that is also where the similarities end.

The end product is Amiga OS, which means that we need compatibility with the original Amiga rom libraries (read: api). Had we started from scratch that would have been a tremendous effort, which is also why Aros is so important. Because Aros gives us a blueprint of how they have implemented these APIs.

But our main source of inspiration is not Aros, but Amithlon. What we want to do is naturally to pipe as much as we can from the Amiga APIs back to the Linux kernel. Things like device detection, memory allocation, file IO, pipes, networking — our library files will be more like thin wrappers that expose Amiga-compatible calls; methods that call the Linux kernel to do the job. So our Amiga library files will be proxy objects whenever possible.

AmithlonQEmu

Amithlon, decades ahead of its time

The hard work is when we get to the window manager, or Intuition. Here we can’t cheat by pushing things back to Linux. We don’t want to install X either (although we can render our system into the X framebuffer if we like), so we have to code a window manager. This is not as simple as it sounds, because our system must operate with multiple cores, be multi-threaded by design and tap into the grand scheme of things. Things like messages (which are used by applications to respond to input) must be established, and all the event codes from the original Amiga OS must be replicated.

So this work won’t be easy, but with the Linux kernel as a foundation – the hardest task of all is taken care of. The magic of a kernel is that of process management and task switching. This is about as hard-core as you can get. Without that you can almost forget the rest. But since we base our system on the Linux kernel, we can focus 100% on the real task – namely to deliver a modern Amiga experience, one that is platform independent (read: conforms to standard Linux and can thus be recompiled and run anywhere Linux can run), preserves as much of the initial formula as possible – and can be successfully maintained far into the future.

By pushing as much of our work as possible into user-space (the process space where ordinary programs run, the kernel runs outside this space and is thus unaffected when a program crashes) and adhering to the Linux kernel beneath the bonnet, we have created a system that can be re-compiled anywhere Linux is. And it can be done so without any change to our codebase. Linux takes care of things like drivers, OpenGL, Sound — and presents to us a clean API that is identical on every platform. It doesn’t matter if it’s ARM, PPC, 68k, x86 or MIPS. As long as we follow the standards we are home free.

Last words

I hope all of this clears up the confusion that has surrounded the subject this week. Again, the misunderstanding that led to some unfortunate posts has been resolved. So there is no negativity, no drama and we are all on the same page.

amidesk

Early Amibian.js prototype, running 68k in the browser via an optimized uae.js

Just remember that I have set some restrictions for my involvement here. I sincerely hope Hyperion and the Aros development group can focus on ARM, because the community needs this. While the Raspberry PI might seem too small a form-factor to run Aros, projects like Aeros have proven just how effective the Amiga formula is. I’m sure Hyperion could find a powerful ARM SOC in the price range of $120 and sell a complete package with profit for around $200.

What the Amiga community needs now, is not expensive hardware. The userbase has to be expanded horizontally across platforms. Amiga OS / Aros has much to offer the embedded market which today is dominated by overly complex Linux libraries. The Amiga can grow laterally as a more user-friendly alternative, much like Android did for the mobile market. Once the platform is growing and established – then custom hardware could be introduced. But right now that is not what we need.

I also hope that the Aros team drops whatever they are working on, forks Pascal Papara’s codebase, and spends a few weeks polishing the system. Abstract away the Linux foundation like Apple have done, get those sexy 32 bit OS4 icons (Note: the icons used by Amiga OS 4 are available for free download from the designer’s website) and a nice theme that looks similar to OS 4 (but not too similar). Get Lazarus (the freepascal IDE) going and ship the system with ready-to-use Pascal, C/C++ and Basic development environments. Bring back the fun in computing! The code is already there, use it!

page2-1036-full

Aeros interfaces directly with Linux; I would propose a less direct approach

Just take something simple, like a compatible browser. It’s actually not that simple, both for reasons of complexity and how memory is handled by PPC. With a Linux foundation, something like Chromium Embedded could be linked into the Amiga side of things and we would have a native, fast, established and up-to-date browser.

At the same time, since we have API level compatibility, people can recompile their Aros and MorphOS applications and they would run more or less unchanged.

I really hope that my little protest here, if nothing else, helps people realize that there are viable options readily at hand. Commodore is not coming back, and the only future this platform has – is the one we make. So people have to ask themselves how much they want a future.

If the OS gains momentum then there will be grounds for investors to look at custom hardware. They can then choose off-the-shelf parts that are inexpensive to cover the normal functionality you expect in a modern computer – while more resources can go into custom hardware that sets the system apart. But we can’t start there. It has to be built up brick by brick, standing on the shoulders of giants.

OK, rant over 🙂

A day with the Tinkerboard, part 2

June 10, 2017 1 comment

Please read the first article here before you continue.

Having gone through a wealth of alternative boards, alternative to the Raspberry PI 3b that is; I have to admit that I am really, really disappointed by the results. Not just with the Tinkerboard but also boards like the ODroid XU4 and the whole fruit-basket of banana, orange and pineapple PI clones.

asus

What a waste

And please keep in mind that I have not been out to discredit anyone. I have been genuinely interested in these alternatives for many reasons, not just emulation or multimedia. I have no problem paying that extra 15£ for a system that delivers twice the power of the PI; in fact I even paid close to 150£ for the UP board. Which is also the only board that so far has delivered on its promise – even considering Microsoft Windows and the horrors of eMMC storage.

Part of what I work with these days includes node.js and full screen browser views, often touch enabled for POS (point of sale) type machines. This is an exciting field because for the first time in history we can now create rich, responsive and rock solid solutions using off-the-shelf web technology.

So while emulation of older systems is interesting for hobby purposes, the performance of vital technologies like the browser – and consequently the performance of JavaScript and the underlying JIT compilation, is naturally paramount for what I do.

So I was really looking forward to the extra power these boards advertise. More CPU power, more cores, extra memory and last but not least – the benefits of a fast GPU. The latter directly impacts the effects we can use in our user-interface. And while effects may seem superficial, they are important for touch feedback and the customer’s experience.

desktop_embedded

We have all come across touch devices that give little or no visual feedback. We all know what it’s like when you type something and the UI just hangs while talking with a server. This is why web tech is so important today. In many ways HTML5 and WebSockets are redefining what we think of as software.

So while I bring few pie-charts and numbers to the table, I do bring honesty and perspective. It’s not all fun and games.

Was it really fair?

It has been voiced by a couple that perhaps my way of testing things is unfair. Like I mentioned in my previous article the PI has a healthy head-start and a rich community that not only consumes and demands, but also contributes substantially to the value of the product. A well established community around a product is worth its weight in gold. The alternative boards have yet to establish that and when it comes to the Tinkerboard, it’s pretty much just getting started.

Secondly, as far as emulation goes, we have to remember that those who make these products rarely have emulation in mind (and probably not Amiga, which by any standard is esoteric compared to Nintendo and Sega consoles). If we look at what people use gadgets like the Raspberry PI for, I think products like Kodi, Apache and various DIY programming experiments out-rank all the retro systems combined. And it’s clear that the people behind the Tinkerboard are more media oriented than anything else; it ships with hardware support for 4k video playback after all.

Having said that, I do feel I gave both the ODroid and Tinkerboard a fair trial. I did not approach these boards as a developer, but rather as a consumer. I have focused on the experience of familiar things – like using a browser and how well JavaScript runs, does the UI lag under stress; things that any customer will face after purchase.

UAE4Arm vs FS-UAE

As far as UAE (Unix Amiga Emulator) goes, there is one thing that might be unfair. Amibian is based around something called UAE4Arm, which is a highly optimized version of the Amiga emulator. I would even go so far as to say it’s THE most optimized UAE version on any platform – Windows and OSX included.

The Amiga was a lot more than just a games machine, it was way ahead of both Windows and OS X for a decade

Comparing that with FS-UAE is perhaps unfair. FS-UAE is based on the older, non-optimized codebase. It’s known for stable emulation and good JIT performance. Sadly the JIT engine only works on x86 and is thus disabled on ARM systems. So to be honest it can’t hold a candle against UAE4Arm. The latter uses lookup tables for everything, fast switch cases instead of long sequences of “if then else” code (easier for the compiler to optimize), and it benefits from hardcore LLVM -O3 optimization. So I agree, comparing FS to UAE4Arm is like comparing a tiger with a kitten. It’s not even in the same ballpark.

Optimization on this level is almost a lost art these days. People are used to languages like C# and Java – which are hopelessly bloated (for those that have seen what raw assembler and hand optimized pascal can deliver). On the Amiga we learned how to shave off a clock-cycle here and there – and the end result really made a huge difference.

The same type of low-level, every-cycle-counts optimization is used throughout UAE4Arm. FS-UAE is safe, slow and can hardly deliver A500 speed on a Raspberry PI. Yet UAE4Arm spits out 3.2 times the power of an Amiga 4000 with a 68040 CPU. That was the flagship power machine back in the day, and this little $40 card delivers 3 times the power under emulation.

It’s like comparing a muffled 1978 Lada with a pissed off 2017 Lamborghini.

Fair is fair

Having tested just about every distro I could find for the Tinker, even back-tracking to older releases in hope that the drivers there would be better suited for the board (perhaps the new drivers were released in a hurry, or a mistake was made) the last straw really was Android. My hope was that Android could have better drivers since it sees a lot of development. It has giants like Google and Samsung behind it – so the last glimmer of light was that Asus had focused on Android rather than Linux.

podcast-event

Will Android save the day and give the Tinker its due credit?

I downloaded the latest image, burnt it and booted – thinking that this should finally put things right. Well it didn’t. Quite the opposite.

The first boot took almost an hour. Yes, an hour. I suspect this is because it did a network restore – downloading packages and requirements from the Internet. But that is still no excuse. I have tried Android on the PI and it was up and running in less than 10 minutes; and that includes having to set some initial user information.

When the Android desktop finally came into view, it was stripped down with no repository package manager. In other words you would have to manually install packages. But that would have been OK had it not been for the utterly sluggish performance.

Android on the PI is no race-car, but at least it doesn’t freeze and lock-up every five seconds. Clicking on a link in the browser froze the whole system, and you were asked if you wanted to terminate the process again and again while Android figured out the meaning of life or whatever. Every single time.

I’m sorry but this is the worst experience I have ever had with a board. The ODroid XU4 at least tries to deliver a good experience. I had no problems finding Linux editions that ran well (especially gaming systems, Kodi and some homebrew distros) – and the Chrome build for the ODroid gave far better results than both the PI and Tinkerboard. It was not the power-house it had been hyped up to be, but at least it delivered a normal desktop experience (even the PI locks up when facing demanding tasks, but the Tinker has an 8-core CPU, twice the RAM and a supposedly superior GPU. With the capacity of running 8 threads and doing task scheduling in batches of 8, the Tinkerboard should remain both stable and interactive under pressure).

I am sorry but I cannot recommend the Tinkerboard at all. The ODroid gives you a slight edge over the PI and if someone baked a more optimized Linux distro for it (it ships with a variant of Ubuntu) then ODroid XU4  would kick serious ass. But the Tinkerboard is a complete disaster.

Do not waste your money on the Tinkerboard. It may have the hardware but the software to utilize and express that power is simply not there! It has been the most pointless purchase I have done this year.

A common waste

The common denominator between these alternatives is the notion that more power equals more stuff. Which ultimately results in biting off more than they can chew. A distro like Pixel is lightweight, optimized, and the PI foundation has spent a considerable amount of time making sure the drivers run as fast as possible.

Weird Science, a fun but completely fictional movie from 1985. I just posted this to liven up an otherwise boring trip down memory lane

Saying it doesn’t make it so, except in Weird Science

What these alternative hardware suppliers don’t seem to get is: what is the point of having a CPU that is twice as fast as the PI, when you waste it on a more demanding version of Linux? Both the ODroid and Tinkerboard ship with Ubuntu clones. And while Ubuntu is wonderful to work with on a high-end x86 machine, with 32 gigabytes of RAM and an i7 CPU growling away under your desk, it is without a doubt the worst possible OS you could pick for a tiny ARM board.

To put things into perspective: the Tinkerboard has the “theoretical” CPU power of a Samsung Galaxy S4 mobile phone (2013). Yet in their infinite wisdom they decided Ubuntu is what is really going to show off this fart of a device.

It makes absolutely no sense from a consumer’s perspective. As a user you want that extra power for running programs. You want to use the extra power on your stuff. Yet these guys seem to think that wasting all the extra power on infrastructure is going to score points.

So if anyone from the Tinkerboard or ODroid camp reads this, please remember the old saying “If you cannot innovate, emulate”. Make a fast, stable and small Jessie-based distro like Pixel. Focus on optimization. Compile whatever you can with maximum optimization. Shave off every clock cycle you can. Hand carve the drivers in assembler if that’s what it takes.

Because the sad result is, that right now the Raspberry PI is outperforming you with lesser hardware. There is no point in buying a Tinkerboard because it actually delivers a worse experience. It has bitten off more than it can chew and it’s trying to be something it’s not.

A day with the Tinkerboard, part 1

June 7, 2017 1 comment

Anyone into IoT mini computers, be it as a hobby or professionally, has no doubt heard about the Tinkerboard. The Tinkerboard is an alternative to the Raspberry PI (aren’t they all) produced by hardware giant Asus. Everything from its form-factor (size) to its color coordinated GPIO pins is more or less the same as the PI. What separates the two is that the Tinker gives you double the CPU power, double the RAM and pretty much double of everything. If we compare it with the Raspberry PI 3b that is.

But you don’t get double for free. The Tinkerboard retails on Amazon for USD 60 which means something in the ballpark of 55£ for those that live in the UK. For the rest of us that live in far-away places like Norway, prices and taxes may vary.

tinker

Before I continue I want to write a few words about pricing. The word “expensive” is one I keep hearing when it comes to the Tinkerboard, but that is extremely unfair considering what you get for your money.

The Raspberry PI 3b is an awesome product and packs a lot of goodies into a credit-card size computer. The same is true for the Tinkerboard, but they have packed even more stuff into the same space! The difference in price is just 15£ – but those 15£ give you the power to emulate demanding platforms like Nintendo Wii, Sega Dreamcast and Playstation (if you are into retro gaming). Platforms the Raspberry PI 3b doesn’t stand a chance of emulating (at least not smoothly).

For serious applications all that extra CPU power can be the difference between success and failure. In many ways the Tinker feels a bit like working on a low-end laptop. If you are building a movie server for your living-room the Tinkerboard will be able to convert and stream movies much smoother and better than the PI. And if you are looking for a device to power your next kiosk system or POS device, again the extra memory and CPU power is going to make all the difference. And with just 15£ between the two models, I cannot imagine why anyone would pick the PI if gaming or movie streaming is on the menu.

Oh and it has built-in wi-fi and bluetooth support! Here are the specs as listed on Amazon:

  • High performance quad-core ARM SoC @ 1.8 GHz with 2 GB of RAM – the Tinker Board features the Rockchip RK3288 SoC with a Mali-T764 GPU and 2 GB of dual-channel DDR3 memory
  • Non-shared Gbit LAN, shielded Wi-Fi with upgradable antenna support
  • Highly compatible PCB & topology – the Tinker Board offers extensive compatibility with SBC accessories & chassis
  • HD audio & HD/UHD video support – the Tinker Board supports 192/24bit HD audio along with accelerated HD & UHD (4K) video playback (* requires use of the Rockchip video player in TinkerOS)
  • DIY-friendly design – the Tinker Board features multiple DIY-friendly features including a color coded GPIO header, silkscreen PCB and color coded pull tabs

That sounds pretty nice, doesn’t it 🙂

One does not simply take on the mighty PI

Being an alternative to the Raspberry PI is not as easy as it sounds. You would imagine that it all boils down to the specs right? So if you beef up the cpu, ram and gpu everyone will throw away their PI’s and buy your stuff? If only that was true. In which case the Tinkerboard would be awesome since it outperforms the PI on every level.

But it’s not that simple. There is more to a product than just raw power.

One of the cool things about the Raspberry PI is the community. The knowledge base of the product if you like. In order to build a community and be a success, everything has to click. I mean just look at the third-party eco-system that exists around the PI; all those companies selling add-ons and widgets, sensors and remote controls. The PI itself is quite dull. It’s all that extra stuff that makes it so fun to play with. And then you have things like excellent customer service, a forum where people help each other out, well written documentation — the value of the product suddenly exceeds whatever you paid for it to begin with.

And the Tinkerboard is not alone in trying to get a piece of the action. It competes with boards like the ODroid XU4, the x86 based UP boards, Latte Panda, The Pine boards and a whole fruit-basket of banana, orange and whatever PI clones. All of them trying to get a bite of the phenomenon that is the Raspberry PI.

So Asus got their work cut out for them.

First impressions

Having unpacked the board and plugged in all the cables (which is exactly as dull as it sounds) the first thing you do is look for software. Which naturally should be a one-click operation on their website.

Well, the first thing that annoyed me was exactly that: the Asus website. The Tinkerboard is not something that belongs on a sterile, streamlined corporate site. It belongs on a website that is easy to navigate, with plenty of information that hobbyists and “tinkerers” need to get started. The Asus website simply lacks the warmth and friendliness of the PI foundation website. I even had to google around a couple of times just to find the disk images. These should be readily available on the site, on the first page even – to make sure customers can get up and running quickly.

tinkerweb

It’s a little bit clangy and a little bit jammy

The presentation of the board was ok, but you really shouldn’t have to use an external search engine to locate essential data for something you just bought. Unlike the PI foundation, Asus seem to have put little effort into what customers get once they have bought the board. As for community, I don’t even know if there is one.

When you finally manage to find the bloody download page, these are your options (there are 17 downloads in total, counting older OS images, config files and schematics):

  • TinkerOS 1.8
  • TinkerOS 1.6 Beta (my initial download)
  • Android, also beta

And don’t expect a polished Linux distro like Pixel either. You get a clean distro with some typical stuff pre-loaded (python, perl for some reason, and various tidbits) but nothing close to the quality of Pixel.

Note: Why they would list older beta releases in their downloads is beyond me. It only adds to the confusion of a sterile list of items. And customers who just got their boards will be eager to try it out and hence make a mistake like I did. Beta and unstable releases should be in a separate category, while the stable and up-to-date images are listed first.

The second thing that really irritated me was something as simple as adjusting the keyboard layout and setting the locale. These are simply not present in preferences at all. Once again you have to google around until you find the terminal commands to set these things manually, which utterly defeats the point of booting into a desktop environment to begin with. This might seem trivial for someone who knows Linux well, but to a novice it can take hours. Things this fundamental should be available in preferences or at the very least as a script, like raspi-config on the PI.
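For reference, this is the kind of thing you end up typing. A minimal sketch, assuming TinkerOS behaves like any other Debian/Ubuntu derivative – the exact packages and tools may differ on your image:

    # Reconfigure keyboard layout and locale from the terminal
    # (assumes a Debian/Ubuntu based image such as TinkerOS)
    sudo dpkg-reconfigure keyboard-configuration   # pick the layout interactively
    sudo dpkg-reconfigure locales                  # generate and select your locale
    sudo timedatectl set-timezone Europe/Oslo      # timezone, if systemd is present

Hardly rocket science once you know the commands – but nothing on the desktop points you towards them.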

But OK. If we look away from these superficial things and focus on the actual board, things are not half bad.

It’s the insides that counts

The first thing you are going to notice is how much faster the system is compared to the PI. When you start a demanding application on the PI, like LibreOffice or Lazarus, the whole system can freeze up for a few seconds while the CPU goes through the roof. I was pleased to find that this is not the case here (at least not as disruptively). You can start a program and the system continues to be responsive while things load.

What made me furious though was the quality of the graphics drivers. Performance-wise the Tinkerboard is twice (actually a bit more) as fast as the PI on all fronts – and the GPU is supposed to deliver a vastly superior graphics experience. Yet when I ran Amibian.js (the Smart Pascal JavaScript desktop system) in Chrome – the CSS hardware effects were painfully slow. I even questioned if the GPU was used at all (Note: which I later confirmed was the case, so much for downloading a beta image by mistake). I was amazed at the CPU power of the Tinkerboard, because despite having to do everything purely with the CPU (moving pixels in memory, allocating device independent bitmaps, rasterizing layers and complex compositions in X), it was still more than usable.

But it sure as hell was not the superior graphics experience I was hoping for.

I also noticed that when moving a normal desktop window (X window that is) rapidly around the screen, it would lag behind and re-trace your steps until it caught up with the mouse pointer. This was very odd since most composition engines would have eliminated these movements as much as possible – and the GPU would do all the work. Again, I blame the drivers and pondered what could possibly make a GPU perform so badly.

gpuwrong

I’m surprised it booted at all, nothing is working

So while the system is notably faster than the PI, I get the sense that the drivers and tooling are not really polished. The hardware no doubt has juice, but without drivers carefully written for the board, most of the extra juice you pay for is wasted.

In the end I opened up Chrome and typed “chrome://gpu” to see what was going on. And not surprisingly I was correct. It wasn’t using the GPU at all, and I had wasted hours on something that should have been clearly marked “Lacks hardware acceleration” in bold, italic, underlined big red letters. Like the PI foundation did when they shipped the Raspberry PI 2 without GPU drivers. But they promptly delivered and it was ok, because you were informed up front.

 

Back to scratch

After a trip back to google I downloaded the right (or at least working) image, namely the TinkerOS 1.8 stable. It booted up, resized the partition to fill the whole SD card automatically (and other boring bits).

The first thing I did was start chrome and use the “chrome://gpu” to get some statistics. And thankfully most of the items were now working and active.

gpubetter

That’s better, although I wonder why GPU DIBs are not active

Emulation, the name of the game

While I have serious tests to perform there is no doubt that what I was most eager to play with was UAE (the Unix Amiga Emulator). Like most retro-heads I have a ton of Raspberry PIs and ODroids around the house with the sole purpose of emulating older game or computer systems. And I think everyone knows that my favorite computer in the whole wide world is the Commodore Amiga.

On the PI, Gunnar’s native Amibian distro gives you awesome performance. If you overclock the RAM, GPU and CPU using safe numbers – the PI 3b spits out roughly 3.2 times the power of a 68040-based Amiga 4000. In other words 3 times the power of a high-end Amiga from the early 90s. That is quite an achievement for a $40 board. You can only imagine my expectations for a board running more than twice as fast as the PI does (and costing that 15£ more).
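For context, overclocking the PI boils down to a few lines in /boot/config.txt. The numbers below are purely illustrative and not a recommendation; Amibian ships with its own tuned values, so check what counts as “safe numbers” for your particular board and cooling before copying anything:

    # Illustrative /boot/config.txt overclock entries for a PI 3b (example values only)
    sudo tee -a /boot/config.txt <<'EOF'
    arm_freq=1300      # CPU clock in MHz (stock is 1200)
    gpu_freq=500       # GPU core clock
    sdram_freq=500     # memory clock
    over_voltage=4     # small voltage bump to keep the above stable
    EOF
    sudo reboot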

18986661_10154521981875906_472132585_o

Boots fine but you can forget about using it. The PI is 100 times faster than this

One snag though was that UAE4Arm, which is the highly optimized version of the Amiga emulator, doesn’t exist for the Tinkerboard. In theory it should compile without problems since the latest version uses SDL exclusively. So there is no odd, deprecated UI toolkit like the old UAE4Arm used.

The only thing available was FS-UAE. And while that is a very popular emulator on Windows and Mac, it’s sadly not optimized as much as UAE4Arm is. The latter uses lookup tables to the extreme, all if-then-else statements have been replaced by fast switches — and it’s just optimized beyond anything else running on ARM.

The result on the PI is phenomenal, but sadly FS-UAE was all I could play with. I will try to compile UAE4Arm on the Tinkerboard later though. But right now I’m too busy.
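If you want to try the same experiment, the build should in theory be the usual SDL2 routine. The repository URL and make target below are placeholders, so treat this as a sketch rather than a recipe:

    # Hypothetical build sketch for an SDL2-based UAE4Arm – adjust URL and targets to the real project
    sudo apt-get install -y build-essential git libsdl2-dev libsdl2-image-dev libsdl2-ttf-dev
    git clone https://github.com/example/uae4arm.git   # placeholder URL
    cd uae4arm
    make CFLAGS="-O3"                                  # aggressive optimization matters for emulators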

Not optimized at all

It is hard to pinpoint exactly where the problem is, but I suspect the older codebase of FS-UAE combined with poorly written drivers is what resulted in the absolutely sluggish behavior.

Don’t get me wrong, if all you want to do is play some A500 or A1200 games the Tinker is more than capable. But if you want to run the same high-performance desktop as Amibian gives you on the PI – you can pretty much forget it.

I set the system to emulate an A4000, activated the JIT engine and set the emulated display to match my desktop. This should have run like hell — yet it was slow as a snail. You could literally see the mouse-pointer slowly trying to keep up as I moved it across the desktop.

One interesting thing though. When I ran fs-uae on the first image, the beta image where everything was crap, it actually ran exceptionally well. But here on the so-called stable and “up to date” version, it was useless for anything but clean floppy gaming.

There are many factors involved in this. The drivers sit at the bottom, just above the kernel, and expose the hardware so it can be used through a common API. On top of this you have stuff like OpenGL, and on top of that again you have SDL, which simplifies typical gaming or graphics tasks. This may sound like a long road for code to travel, but it’s actually very fast. If the drivers expose the power of the GPU, that is.

I can only conclude at this point that the Tinkerboard has the firepower. I have seen the results of tests performed by others (and tested a retrogaming image) – and you feel the difference almost immediately when you boot the device. But the graphics drivers seem… crippled somehow. It’s easy to point the finger at Asus and say they have done a terrible job at writing drivers here, but it can also be a bottleneck elsewhere in the system.

This is something Raspberry PI has got so right. Every part of the system has been optimized for the hardware, which makes the PI feel smooth even under a lot of stress. It delegates all the hard graphical stuff to the GPU – and the GPU just deals with it.

Sadly this is not the case with the Tinkerboard so far. And it really is such a shame because the specs speak for themselves.

Android

Next time: We will be giving Android a test and hopefully that will have drivers that are more up to date than whatever we have seen so far..

Smart Puppy: Smart Pascal meets Linux!

April 21, 2017 Leave a comment

One of my absolute favorite operating systems in the whole world has to be Puppy Linux. I discovered it just a few days ago and I have fallen completely in love with this thing. I can vaguely remember giving it a testdrive a few years back, but I didn’t know much about Linux in general so I didn’t understand what it represented.

So if you are looking for a friendly, small, fast and easy to use Linux system – then Puppy is about as friendly as it gets. The Facebook user group with the same name is a warm and friendly place to be. Much like the Delphi Developer group, the admins take pride in keeping things orderly – and people who hang out there engage, care and help each other out.

Before you run out and download Puppy, which I hope you do later – please understand that Puppy is very different from Linux in general. You could almost say that it’s a whole alternative to mainstream Linux as we know it.

But, once you know about the differences then you are in for a treat! I will explain them in the article, so please be patient and take the time to digest.

Puppies hate fluff

One of the reasons I never converted wholesale to Linux (and yes I did try) – is that the average Linux distro is unbearably and unnecessarily cryptic. For some reason Linux architects suffer from a terrible affliction, namely a shortage of characters. This sickness means that Linux doesn’t have enough characters for everyone, so programmers must use a maximum of five letters when naming their software. If coders ignore this shortage and blatantly name something directly or intuitively – then Richard Stallman and Lady Gaga will order a “drive-by pony-tail cut” on the dude. And a Linux administrator without his pony-tail is finished (the nerd equivalent of flipping burgers at McDonalds).

Puppy Linux does contain its fair share of the classical Linux software (that goes without saying). But the man behind this wonderful Linux flavour is also a level-headed, clever and resourceful man (or woman) – so he has thankfully broken with what can only be described as archaic thinking.

puppy01

Puppy Linux is not exactly software impaired

So even with my minimal Linux experience I was able to navigate around the filesystem and locate documents (which here are called “Documents” and even “My Documents”). There is a whole bunch of these tiny differences, small things that make all the difference. From the way he (or she) has named things – to where things are stored and placed.

And it’s so small! The basic install is less than 300 megabytes in size (!) Yes you read that right. The generic Puppy Linux installation with desktop and a few popular applications is less than 300 megabytes.

In my case I can have a fully loaded development studio, featuring GCC, FPC (freepascal), the Lazarus IDE, the CodeBlocks IDE, the KDevelop IDE, the Anjuta developer studio – and last but never least Smart Mobile Studio – on a 2 gigabyte USB stick (!) I don’t think you can even get USB sticks that small any more (?) The smallest I have is 32 gigabyte and the largest is 256 gigabyte.

But before we go on with the wonders of Puppy Linux – let’s look at what Linux did wrong. Why is Linux even to this day considered hard to use? Or to put it another way: what has Windows and OS X done right to be considered easier to use, yet capable of the same (and often more)?

Naming, what Linux did wrong

One of the tenets of professional programming is to ensure that classes, members and functions have meaningful names. There was a time when you would get away with single character class, variable and method names — but that won’t fly in 2017. Your Q&A department would have you for breakfast if you checked in code like that. Classes, name-spacing and packages should be descriptive. End of story.

The reason this has become an almost sacred law, should be obvious: it may not be you that maintains the software 5, 10 or 15 years down the road. A piece of code should always be written in such a way that it can be understood and thus maintained by others within a reasonable time-frame (which also means plenty of comments and good documentation). This is not a matter of preference, but of time and money. And when you pay out salaries these factors are one and the same.

So naming elements of software in 2017 has a lot of criteria attached to it. The most obvious so far being:

  • Always name things clearly because that
    • ensures ease of use
    • simplifies maintenance
    • removes doubt as to “what is what”
    • fewer user mistakes
  • The fewer mistakes, either in understanding something or using something, the less money a business throws out the window. Money that could be spent paying you to make something cool instead (or fix bugs that are critical).
  • The fewer user mistakes caused by customers, the more your service department can focus on quality of service. When a company starts it usually has outstanding support, but as it grows its service-desk slowly turns into robots.
  • The easier and more intuitive a system is, the more users it will attract. If people can pick something up and just naturally figure out how things work, then statistics show that they most likely will continue using it through thick and thin.

Right. With these rules in mind – what happens if you take them but apply them to Linux instead? Not Linux code or libraries or stuff like that, but Linux the user-experience from top to bottom?

And don’t get me wrong, I think Linux is awesome so this is not an attack on Linux; I’m simply pointing out factors that could help make Linux even better.

I mean, just look at the Linux filesystem. Again you have this absurd shortage of characters. Why would anyone abbreviate the word “user[s]” into “usr”? It makes no sense. Same with “lib” – would it have killed you to call it “libraries”? And so it continues with “dev”, because calling it “devices” would cause the space-time-continuum to break.

Shell shocked

The shell (or command-line under Windows) and its commands are really the thing that annoys me the most. There is a fine line between use and abuse, and the level of abbreviation here is beyond whimsical and harmless – and well into the realm of silly and absurd.

Who in their right mind would name a command “ps”? What could it possibly mean? The first thing that comes to mind is “print spool”. If you come from any other platform than Linux (and perhaps Unix, I don’t know) you would never imagine that “ps” actually means “list all running processes and their states”.

ps_command

“ps” lists the running processes and their states

Above: running “ps” from the shell lists the running processes. Would it have killed the coders to just call it, oh perhaps, “listprocesses” or “showrunningprograms”?
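Nothing stops you from wrapping the cryptic names yourself, of course. A couple of aliases in ~/.bashrc and the shell suddenly speaks English (the alias names below are just my own suggestions):

    # Friendlier names for a few classic commands (drop these in ~/.bashrc)
    alias listprocesses='ps -eo pid,user,stat,%cpu,%mem,comm --sort=-%cpu'
    alias showdiskusage='df -h'
    alias showmemory='free -h'

But the point stands: the defaults are what everyone learns first, and the defaults are cryptic.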

The “ps” command is just one in a long, long list of commands that really should be brought into the twenty-first century. The benefits should be obvious. It should not be necessary for a 43-year-old man to blog about this, because it’s been a problem for the better part of three decades.

  • Kids and teenagers are the bread and butter for all operating systems. The faster a kid or teenager can do something with a system, the more loyal that individual will be to the platform in the future.
  • Linux needs developers and users from other platforms. When someone who has been a successful developer for almost 30 years finds a system cryptic and hard to use, how much harder will it be for a non-technical user?
  • Standards are important. The location of files, libraries and settings should be uniform. As of writing, Linux seems to have 3 different standards (again, I am no expert): systemd, initd and “systemx”. The latter is just a name I made up, because no-one really knows what it’s called. We are now in the realm of PlayStation, ChromeOS, WebOS and systems that build on Linux – but deviate the moment the drivers have loaded.

Again, I’m not writing this in a negative mindset. I have been using Ubuntu for a while as an alternative to Windows and OS X. But this has been a purely user-centric experience. I have not done any programming except random bits of Freepascal and node.js experiments. I have enjoyed Ubuntu purely as a user. Writing documents, checking email, browsing the web, IRC, reading news groups – ordinary stuff.

So I am very positive to Linux, but I have yet to find “my” flavour of it. A Linux distro I feel at home with and that appeals to my way of working.

Until today that is..

Enter Puppy Linux

Puppy is a flavour of Linux that just demolishes some of Linux’s holiest concepts. Everyone will tell you never to run as root – always leave the root account in peace, and keep it under lock and key just in case someone gets into your second account, right?

Well, not Puppy. Here you are expected to run as root, and you can, if you for some reason must, jump out into a secondary user which is fake. So indeed – Puppy Linux is a single user Linux system. It’s the rebel, the scoundrel and rogue of the Linux world – the distro that couldn’t care less what the other guys are doing.

gcc.png

Fancy a spot of coding? GCC is an SFS module away…

Secondly, and this is very cool, Puppy is highly modular. No I’m not talking about packages, all Linux distros have that in some form or another. No I’m talking about something called SFS files, short for squashed file-system.

To make a long story short, Puppy allows you to mount compressed files as disks and they become a part of the system. It’s a bit like the virtual-drive API on Windows (if you have ever coded against that?). You may have noticed in Windows how you can double-click on a .ISO file and suddenly the file is mounted in the file-explorer, and stays mounted until you manually unmount the damn thing?

Well, SFS is that but also much more. Because when you mount the SFS file, whatever applications it contains register on the start-menu, add themselves to the global path and essentially become one with the whole system. It took me a while to wrap my head around this (good features always come with a price, so I kept waiting for the negative. But there was none!). The people I talked to about this were not coders, so they had some very colorful explanations of how it all worked. But once I realized an SFS was just a compressed archive with a fixed structure (including a mount script and a dis-mount script) I got the picture.
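Under the hood an SFS module is a SquashFS image, so you can peek inside one with nothing more than a loop mount. This is a generic sketch (the file name is just an example); Puppy’s own SFS loader does the extra work of registering menu entries and paths on top of the mount:

    # Inspect an SFS module the generic way (Puppy's loader does much more than this)
    mkdir -p /mnt/sfs
    sudo mount -t squashfs -o loop,ro devx.sfs /mnt/sfs   # 'devx.sfs' is just an example file name
    ls /mnt/sfs
    sudo umount /mnt/sfs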

Size and speed matters

Before I started using a PC back in the early 90’s I was a huge Amiga fan. I still am (as you no doubt have noticed). One of the first things I found, or first differences between Amiga computing and PC computing that hit me – was how wasteful PCs were. I remember I was shocked when I saw how much space and CPU power the average programmer just wasted — because on the Amiga everyone strived to be as resourceful and efficient as possible.

We would spend days optimizing even the smallest parts of our applications just to ensure that it ran at top speed and produced as little bloat as possible. This was just baked into us, it was the way of the force and as common as your grandfather’s work ethics. Quality and achievement went hand in hand.

cb

CodeBlocks is an excellent IDE 🙂

When you fire up Puppy Linux you are instantly reminded that there are people to this day who care about size and speed. And that maybe, just maybe, consumerism has tricked you into throwing away perfectly usable technology year after year. Machines that actually had more than enough CPU power for the tasks you wanted, but were slowed down by bloated operating systems, poor programming and lazy code generators.

Puppy Linux is the fastest bloody Linux you will ever run. The only operating system I have tried that runs faster is Aros compiled for ARM (a distro called Aeros, a reverse-engineered edition of Amiga OS). But as far as x86 and the Linux kernel goes — Puppy Linux is the bomb.

I know I’m repeating myself here but: less than 300 megabytes for a fully loaded Linux distro with text processor, browser, devkit, music player, video player and all the “typical” applications you would use for daily tasks? And it truly is the fastest hunk of junk in the galaxy without question.

Amiga coders and the cult of joy

When I started to snoop around the Puppy environment and community, I started to notice a couple of “tell-tale” signs. Tiny, subtle things that only an Amiga coder would pick up on. Enough to give you a hunch, a gut feeling – but not enough to blatantly say it out loud. “Amiga guys did this” I would whisper to myself. And it’s not really such a big surprise to find that many coders now in their 40s used to be Amiga coders.

In 30 years time there will be company owners and CEOs that grew up with Playstation and have fond memories of that. But they won’t recognize each other by their craftsmanship – that is the difference.

cult

The cult of joy lives on, albeit in new forms

The Amiga was special because it was not just a games machine. It was also a complete rewrite of what constituted the power operating system of its time: Unix. In other words they copied the best stuff from Unix (which by the way had the same absurd filesystem as Linux still has) but cleaned it up. First thing to be cleaned was (drumroll) the filesystem. But that’s another story all together.

When I entered the Puppy Linux forum I naturally mentioned that I was a complete total Linux novice, and that my favorite machine before x86 was an Amiga. And what do you think happened? Let’s just say that more than a few greeted me with open arms. These were the Amiga users that went to Linux when Commodore went under all those years ago. And they had been active in shaping Linux ever since (!)

So yeah, I had a great time on their forums – and it was like running into your long-lost cousin or something. Like when you haven’t seen a family member in 30 years and suddenly you meet them face to face.

Tired of 30 gigabyte operating systems?

Puppy Linux is not for everyone. It’s the kind of system you either love or hate. I have yet to find someone on a middle-ground regarding puppy. Either you love it, or you hate it. Or if you prefer: either you use it and are thrilled about it, or you never install it.

It has a lot of good things going for it:

  • It is built to be one of the smallest working desktop environments you can get
  • It is built according to “the old ways”, where speed, efficiency and size matter
  • It runs fine on older hardware (my test machine is an 8 year old laptop) and makes stuff you would otherwise throw away become valuable again.
  • It is storage abstracted, meaning you can have all your personal stuff inside a single SFS archive (easier to back up), while the operating system remains on a USB stick.
  • You don’t have to permanently install it (again, boot from a USB stick).
  • It is single user by default, which is perfect for IOT projects and devices!
  • It supports ARM, so you can now enjoy this awesome thing on Raspberry PI 3 !
  • Its Linux so it has all the benefits of a rich driver database
  • The latest Puppy is binary compatible with Ubuntu (meaning it can install packages built for the matching Ubuntu release)
  • There are 3 different desktops for it (to my knowledge), so if you don’t like the default client just install something else
  • It is the perfect rescue USB stick. At less than 300 MB you can fit it on any old USB stick you have around the house; I think the smallest you can buy now is 4 GB (see the sketch right after this list)
  • It has a warm, helpful, friendly and international group of users
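Writing that rescue stick is the usual dd routine. Double-check the device name first, because dd will happily overwrite the wrong disk:

    # Write a Puppy ISO to a USB stick (replace /dev/sdX with your actual stick!)
    lsblk                                   # identify the USB device first
    sudo dd if=puppy.iso of=/dev/sdX bs=4M status=progress conv=fsync
    sync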

Oh and it’s free!

As a final note: I installed Wine, the system that makes it possible to run Windows software on Linux (not an emulator, more of an API-call middleware / dispatcher). I was quite surprised to see it run Smart Mobile Studio on the first try!
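For those curious, getting to that point is nothing exotic. On a Debian/Ubuntu compatible setup it looks roughly like this (Puppy users would typically grab Wine through the Puppy Package Manager or as an SFS module instead, and the installer name below is just an example):

    # Install Wine from the repositories and run a Windows installer under it
    sudo apt-get install -y wine
    wine SmartMobileStudioSetup.exe   # example installer name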

So fancy a bit of hacking this weekend? Why not give puppy a go?

Check it out here: http://puppylinux.org/main/Download%20Latest%20Release.htm

Smart Pascal + WebOS = true

April 15, 2017 Leave a comment

If you own a television (and who doesn’t) chances are you own an LG model. The LG brand has been on the market since before recorded history (or so it feels) and has been a major player for decades.

What is little known though, is that LG also owns and finances a unique operating system. This is not uncommon these days; most NAS and cloud vendors have their own system (there are roughly 20 cloud based systems on the market that I know of). The only way to make a Smart TV (sigh) is to add a computer to it. And if you buy a television today chances are it contains a small embedded board running some custom-made operating system designed to do just that.

Every vendor has their own system, and those that don’t usually end up forking Android and adapting it to their needs.

Luna WebOS

LG’s system is called WebOS. This may sound like yet another html eye candy front end, but hear me out because this OS is not what it seems.

VirtualBox_Developers LuneOS emulator appliance 20151006131924-stable-038-253_15_04_2017_16_01_43

Except for some font issues (see disk label) Smart loaded fine under WebOS

Remember Palm OS? If you are between 35 and 45 you should remember that before Apple utterly demolished the mobile-phone market with their iPhone, one of the most popular brands was Palm. Their core product was the “Palm Pilot”, a kind of digital filofax (personal organizer); later Palm models rolled the organizer and a mobile phone into one.

WebOS grew out of Palm (as the successor to their old Palm OS devices), but it has been completely revamped, given a sexy new user interface (looks a lot like Android to be honest!) and brought into the present age. And best of all: it’s 100% free! It runs on a plethora of systems, from x86 to Raspberry PI to MIPS. It is used in televisions, terminals and set-top-boxes around the world – and is quite popular amongst engineers.

Check out their new portal here; there are also plenty of links to pre-built images if you look around: https://pivotce.com/

JavaScript application stack

One of the coolest features, in my view, is that their applications are primarily made with JavaScript. The OS itself is native of course, but they have discovered that JavaScript and HTML5 are an excellent way to build applications. Applications that are easy to control, sandboxed and safe, yet incredibly powerful and capable.

VirtualBox_Developers LuneOS emulator appliance 20151006131924-stable-038-253_15_04_2017_16_00_46

Smart Desktop booting Quake 3 in Luna OS

Well, today I sent them an Email requesting their SDK. I know they use Enyo.js as their primary and suggested framework – but that is no problem for Smart Mobile Studio. A better question is: can Luna handle our codebase?

When I loaded up Quake 3 in pure Asm.JS the TV crashed with a spectacular access violation. So they are probably not used to the level of hardcore coding Smart developers represent. But yeah, Quake III is about as advanced as it gets – and it pushes any browser to the outer limits of what is possible.

Once I get the SDK, docs and a few examples – I will begin adding support for it as quickly as possible. Which means you get a new project type especially for the platform + API units (typically stored under $RTL\Apis folder).

Why is this cool?

Because with support for Luna / WebOS, you as a Delphi or Smart developer can offer your services to companies that use WebOS or Luna (the open source version) in their devices. The list of companies is substantial – and they are well established companies. And as you have no doubt noticed, hardware engineers don’t always make the best software engineers. So this opens up plenty of opportunities for a good Object Pascal / Smart developer.

17948597_10154372538120906_1912877833_o

Luna has an Android-like quality to it – except it’s smoother to use

Remember – other developers will use vanilla JavaScript. You have the onslaught of the VJL and the might of our libraries at your disposal. You can produce in days what others spend weeks achieving.

These are exciting days! I’ll keep you posted on our progress!

Embedded boards, finally!

December 19, 2016 Leave a comment

I was about to conclude that this day sucked beyond measure, but just as I thought as much – the doorbell rang. It was FedEx (ta da!) with not one package, but two packages of super nerdy goodness. And here I was sure these puppies wouldn’t arrive until after Xmas!

Top the x86 UP board, left bottom a Raspberry PI 3, bottom right the ODroid XU4

The ODroid XU4

A while back I ordered the very exciting “Raspberry PI 3 killer” ODroid XU4. It’s a bit of a unicorn, said to be roughly 10 times faster than the Raspberry PI 3. Honestly, having looked at the specs I can’t imagine it being more than 3 to 4 times faster, depending greatly on what your application is doing and the operating system in question. Here is the full spec sheet:

  • Samsung Exynos 5422 Cortex™-A15 2 GHz and Cortex™-A7 Octa core CPUs
  • Mali-T628 MP6(OpenGL ES 3.0/2.0/1.1 and OpenCL 1.1 Full profile)
  • 2 Gbyte LPDDR3 RAM PoP stacked
  • eMMC 5.0 HS400 Flash Storage
  • 2 x USB 3.0 Host, 1 x USB 2.0 Host
  • Gigabit Ethernet port
  • HDMI 1.4a for display

As you can see the ODroid comes armed with 8 CPU cores while the Raspberry PI 3 (RPI3) has only 4. It also comes with twice the RAM (which really impacts performance), and tests have shown the ODroid disk/IO speed is roughly double that of the RPI3. But the cores are what caught my eye, because 8 cores means 8 simultaneous threads. This means code written for node.js, Apache [PHP] or indeed custom, natively compiled Free Pascal servers will be able to handle twice the payload straight off the bat. For stateless protocols like http, I am guessing a performance factor of 3 to 1 compared to an RPI3.
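A simple way to sanity-check that kind of claim is to hammer whatever service you deploy with a concurrency level matching the core count. A rough sketch using ApacheBench – the host name and port are placeholders for your own node.js or Free Pascal service:

    # Rough throughput check: 8 concurrent connections to match the 8 cores
    sudo apt-get install -y apache2-utils
    ab -n 10000 -c 8 http://odroid.local:3000/    # placeholder host and port

Run the same test against the PI and the ODroid and the requests-per-second figures tell you how much of the extra hardware your stack actually uses.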

Having said all this, there will be exceptions where the PI3 performs equal or better. The RPI3 SoC has better HD video functionality, so the ODroid has to work harder to produce the same.

For those of you thinking the ODroid will solve all your Amiga emulation problems, the answer is yes. It is significantly faster than the RPI3. But never forget that single threaded applications like UAE (Unix Amiga Emulator) involves a high degree of chipset synchronization. If the chipset performs out of sync (for instance if the blitter finishes faster than it should), all manner of problems will occur. So all that synchronization causes some parts to wait. Meaning no matter how fast your computer is (even my Intel i7 CPU) UAE will never reach peak performance. It will never use the full capacity of a single core due to synchronization.

A small note is also popularity, which means fewer updates. ODroid has a somewhat slower update cycle than Raspberry. It has thousands of users, but it’s not even close to getting the attention of the Raspberry PI project. And where the Raspberry website has been community oriented, with inspiring graphics, free tutorials and a huge forum from day one – the ODroid has nothing that even compares.

From what I read and hear the biggest problem has been kernel updates. But, seriously, that is often a Linux fetish. Unless it’s a super important update, like going from 32 to 64 bit or patching a really terrible security flaw – most users are not going to be bothered by a kernel that is 2 versions old. You still have access to the proverbial library of Alexandria that is the Advanced Package Tool (the apt-get command), and compiling a few programs from source is not that hard. It’s typically download, unpack, configure, make and install – and that’s it.
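That classic source build really is just a handful of commands. A generic sketch (package name and URL are placeholders):

    # The classic "download, unpack, configure, make, install" dance
    wget https://example.org/sometool-1.0.tar.gz      # placeholder URL
    tar xzf sometool-1.0.tar.gz
    cd sometool-1.0
    ./configure
    make -j$(nproc)     # use all available cores for the compile
    sudo make install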

Naturally, considering the faster CPU of the ODroid, double the ram, double the IO speed – emulators like UAE will be awesome. ODroid is also the only ARM SoC out there in this price range that plays Sega Saturn and PSX 2 games without any problems. And it will also be far better suited for servers, be it natively compiled freepascal servers, mono ASP.net or Smart Pascal [node.js] work.

The UP x86 board

The second package contained another embedded board, this time the x86 based UP board. I bought the most expensive model they had, containing 4 gigabytes of ram and 64 GB EMMC on-board storage. The board sports a 64 bit Intel® Atom™ x5 Z8350 processor, running as high as 1.92 GHz. Between Raspberry PI 3, ODroid XU4 and UP – let there be no doubt which model will come out on top.

  • Intel® Atom™ x5 Z8350 Processor 64 bit – up to 1.92GHz
  • Intel® HD 400 Graphics, 12 EU GEN 8, up to 500 MHz – supports DirectX 11.1/12, OpenGL 4.2, OpenCL 1.2, OpenGL ES 3.0, H.264, HEVC (decode), VP8
  • 4GB DDR3L
  • 64GB eMMC
  • 4 x USB2.0 external connector
  • 2 x USB2.0 port (pin header)
  • 1 x USB 3.0 port
  • 1 x Gb Ethernet (full speed) RJ-45
  • HDMI (DSI / eDP)
  • MIPI-CSI Camera interface
  • 5V DC-in @ 3A 5.5/2.1mm jack

Where the ODroid is rumoured to be 10 times faster than a RPI3, that statement is closer to urban myth than fact; the UP board on the other hand IS, without a shadow of a doubt, 10 times faster (and then some), no question about it. Since this is essentially a vanilla x86 SoC PC, the world is your oyster. The full onslaught of x86 software is at your disposal, be it Windows or Linux you choose as your base.

x86 UP board, same size as the PI3 but packing a hell of a punch!

The board has no problems running Windows 10, Ubuntu (full 64 bit version) and Android. And it’s actually more responsive than a few laptops on sale. I wanted a cheap laptop for dedicated Amiga Emulation – but having tested both the low-end Asus and Dell models it just left me wanting. The problem with cheap model laptops is often the absurd memory they ship with (1 to 2 gigabyte). The devices spend more time swapping data back and forth between ram and disk than they do running your code!

This is not the case for the UP board thanks to the on-board 4 gigabytes of ram.

Being an Embarcadero Delphi and Smart Mobile Studio (Object Pascal for JavaScript) developer, this board is perfect for my needs. It has so much more to offer than the cheap ARM boards and is something I can use to create custom hardware projects and avoid many of the adaptation problems associated with ARM Linux.

While I love the Raspberry PI 3, Linux takes some getting used to. There are things that take me days to figure out on Linux that I complete in minutes on Windows (I have been using Windows since the beginning after all). This will change as my skill and insight into Linux matures, but if I could choose between the RPI3 and the UP board, I would pick the UP board every time.

Price is a factor here. RPI3 sells for between $35-40, the ODroid retails for $99 while the x86 UP board can be yours for $150. But you can also buy a cheaper model with less ram and EMMC storage. The UP provider has the following options:

  • $99 – 2 gigabyte memory, 16 gigabyte EMMC storage
  • $109 – 2 gigabyte memory, 32 gigabyte EMMC storage
  • $129 – 4 gigabyte memory, 32 gigabyte EMMC storage
  • $150 – 4 gigabyte memory, 64 gigabyte EMMC storage

If you’re thinking “oh, for that price I could get 2, 3 and 4 PI’s respectively!”, keep in mind the level of CPU power, graphics speed and available software here. The fact that you can install and run Windows 10 and just copy over your Delphi or Smart applications is really sweet.

And if you are into emulation then, naturally, Windows has the latest and greatest. Things like EmulationStation are going to run so much better on this device than anything else at this price, especially if you get the x86 Linux edition. You can run the latest WinUAE on Windows, giving you full PPC emulation and essentially booting straight into Workbench and OS 4.1. $150 for an OS4.x capable SoC that is x86 based makes a hell of a lot more sense than forking out $489 for the A1222 low-end PPC based Amiga NG board planned for 2017. I mean, who the hell buys PPC in 2017 (!) Emulation will always be a little bit slower than the real thing, but the difference is negligible.

And with the UP board you can also recycle the hardware when you get bored with OS 4. I mean, there are only so many things you can do with a modern Amiga. It is great fun for enthusiasts like myself, but I would much rather run a juiced up version of OS 3.9 with my massive collection of WHDLoad software, CD32 software and modern compilers (like Free Pascal 3.1, which works brilliantly on an emulated, classic Amiga).

Programming

For the developer the UP board gives you the same choices you enjoy on your Windows development machine in general. You can use it to deliver your Delphi, C++ Builder or Smart Pascal solutions. If you happen to own a Microsoft Windows Embedded license, its performance will be greatly enhanced since you can drop a lot of the "standard" stuff that has to ship with Windows. Remember, a standard Windows installation is written to work on millions of PC's and an equal number of different hardware configurations. For customized, single purpose applications (like a kiosk system, information booth or cash machine type system) you will be able to cut out quite a lot. Do you need support for printers? Do you need the driver collection for hardware that is not there? Windows Embedded allows you to cut the disk image down to the bone, and it's also written to run on slower CPU's than what people in general own – so the performance is much better.

  • Run your Delphi projects, it’s a normal PC after all
  • Run your node.js Smart Pascal projects with ease
  • Make use of nodewebkit for Smart Pascal to create full screen, desktop oriented software, enjoy the full scope of GPU powered CSS
  • Enjoy the debugging tools you already know
  • Run familiar databases like MSSQL, MySQL and Firebird with friendlier and more mature editors and tools
  • Use the friendlier backup solution that ships with Windows rather than some cryptic Linux solution (although some Linux versions with desktop control-panels are just as great!)
  • Use GPIO ports from Delphi and C++ builder (!)
  • Just hook your UP board into your network and install/setup via remote desktop. It saves a lot of time.

If you are a Delphi programmer looking for a reasonable embedded board to ship your killer Windows-based product, the UP board is by far the best option I have seen (and I have tested quite a few boards out there!).

The problem with high-end boards is not just the initial price, which can be anything from $300 to $400. Sure, these boards ship with much faster Intel i3-class processors, but you end up paying extra for everything. Need a new ram module? That will set you back a pretty penny! Want GPIO? Again we are talking $100+ just to get that.

By the time you sum up the expenses, you realize it would have been cheaper to just visit the local computer store and buy a micro-ATX board with more ram and a faster processor (!). Micro-ATX is not that big, perhaps two times larger than the UP board (if you lay them out side by side). But a micro-ATX board is often too high to be practical for embedded projects where you want to cram as much hardware as you can into something the size of a router or set-top box. The heat sink hovers over the motherboard like the London Eye.

Here is what you should have in mind:

  • Is size a factor?
    • No
      • Buy a cheap micro-ATX PC
    • Yes
      • Take a look at the boards listed below
  • Do you need Windows support?
    • No
      • Get an ARM based device
    • Yes
      • Do you need an i3 or i5 CPU?
      • Do you need GPIO built-in?

Note: The UP project is presently busy working on their second Kickstarter, which is cleverly called "UP 2" (sigh). This board is slightly larger, being dubbed "UP squared", but there is a good reason for that. First of all it ships with the more powerful Intel® Pentium™ N4200 2.5 GHz CPU, up to 8 gigabytes of memory and 128 gigabytes of emmc storage. Just like the present UP board you will be able to pick a configuration that matches your wallet, and they are aiming at roughly the same price range as they have now. Head over to the UP project and have a peek at the specs!

Negatives

So far the only negative thing about the UP board is the speed of the emmc storage. As you probably know, emmc is a storage medium designed to be a cheap alternative to SSD. But when it comes to speed it can be anything from SD card to USB 3 in actual performance. This is very vendor specific, and obviously the cheaper models are going to be the slowest. So the first thing you are going to notice is that even though this is a PC, installing things takes a lot longer.

You can however enter the BIOS and boot from a USB 3 stick. For homebrew projects that shouldn't matter much, and these days a small and sexy 128 or 256 gigabyte stick (you know, those tiny USB storage devices that are barely bigger than the USB socket itself) is affordable.

I find myself having to look for negatives here. I do think the UP organization should do something about the startup of the device. When you boot, the UP logo is displayed, but they should have added a line of text about what key to press for the BIOS. I ended up going through the typical ones, F1, F2, F8 and F10, which did nothing. The next key was PrintScreen, before I finally hit DEL and the BIOS editor came up.

Insignificant, but a small detail that would make their product more polished.

A far worse issue is how they charge money for branding. When the product boots, the UP logo comes into view for a couple of seconds (in full-screen). If you want to replace that, the minimum fee is $500 (for a small picture). This is something that simply infuriates me, because you can't change it yourself.

When you buy an embedded board for production purposes, a hidden cost like this for something as simple as a picture is completely unacceptable. I sincerely hope they drop this practice, because I will not use a board with a fixed boot picture in the embedded systems I deliver. There are plenty of other boards out there with similar specs at competitive prices. Being able to brand your own system is considered normal in 2016 and has been common for years now.

At least an option in the bios for removing or hiding the UP logo should be in place.

 

The test

For the next few days before and after Xmas, I’ll be playing with these boards as much as time allows. I have already installed the latest Ubuntu on the UP board – and it performed brilliantly. I am presently giving Windows 10 a test drive, and my primary aim will be Smart Mobile Studio graphics demos and UAE running Amiga OS 4.1 final edition. I will also test how emulators work. This is especially exciting for the ODroid since that is the one most people will pick as an alternative to Raspberry PI 3.

If you are a dedicated retro-gamer or just love all things Amiga then again, the UP board should be of special interest to you. It will set you back $150, but considering that it has the exact same form-factor as the Raspberry PI (except its components sit a few mm higher) and is considerably faster (in a league beyond both the PI and the ODroid), this could be your ticket to a cheap "next-gen" emulated Amiga. It will run OS 4.1 Final Edition without problems under WinUAE and it will be a pleasant experience.

I will give you updates as the frame rates, execution speed and overall performance numbers come in. Oh, and the tests will obviously use the RPI3 as the baseline.

Cheers guys!

 

Smart Pascal, supported server types

December 2, 2016 1 comment

Use node.js to fill your xmas with fun!

Node.js is probably one of the coolest pieces of software I have had the pleasure to work with for years. It's platform independent and available for just about every operating system you can imagine. I would even go so far as to say it has become universal.

NodeJS allows you not only to write server-side JavaScript, but also your own system level services. This is especially easy on Linux where the philosophy regarding a service is somewhat different from Microsoft Windows. On Linux, a simple bash script can be installed as a service. You can write services in python, perl or whatever tickles your fancy.

Our latest addition: UDP

Today I had a few minutes to test the UDP implementation I finished last week, and I was for some odd reason expecting an exception or "something" to come up. You know… when something is that easy to write, there's typically a catch, right? Or maybe I'm just an old, cynical Delphi developer. Either way, it worked on the first try!

So easy, so powerful, and you can deploy it anywhere: an embedded system, a dedicated server, or push it to your Amazon or Azure cloud stack. Node.js is so powerful once you understand how to use it.

Compiled SMS (node.js) talking with a Delphi application

Now I realize that UDP is not what you use for high-end, reliable communication. But that is beside the point. I want the code you get access to in our next update to be polished, easy to use and something you can rely on. And the keyword here is “co-operation”. My personal service stack that I host in my own home is written in 4 different languages. You have Delphi and C# services running under Windows, you have Lazarus daemons on my Linux box (a full PC) and last but not least — you have Smart Pascal servers running on embedded hardware.

Our list of server types now include:

  • HTTP
  • TCP
  • WebSocket
  • UDP

I use UDP more or less as a signal trigger between processes to handle wait, ready, update and even restart. So by broadcasting a single “shutdown” signal, all my machines will gracefully stop and then power down.

So, what does a UDP server look like? It borders on the ridiculous how little effort it takes:

procedure TNodeService1.SetupUDPServer;
var
  Server: TNJUDPServer;
begin
  Server := TNJUDPServer.Create;

  // Basic socket setup
  Server.Port := 1881;
  Server.Address := '127.0.0.1';
  Server.MulticastLoopback := true;
  Server.Broadcast := true;
  Server.Exclusive := false;

  Server.OnMessage := procedure (Sender: TObject; Data: variant;
    RequestInfo: TNJUDPRequestInfo)
  begin
    writeln('Message received!');
  end;

  Server.OnError := procedure (Sender: TObject; ErrorObj: TJsErrorObject)
  begin
    writeln('Error: ' + ErrorObj.message);
  end;

  Server.OnClose := procedure (Sender: TObject; ErrorObj: TJsErrorObject)
  begin
    writeln('Server closed');
  end;

  Server.OnAfterServerStarted := procedure (sender: TObject)
  begin
    writelnF('Server started, listening on port %d @ %s',
      [Server.Port, Server.Address]);
  end;

  // Start listening
  Server.Active := true;
end;

That’s pretty much it. Naturally you have to fill in the blanks, but the above code is all you have to write to create a UDP server. The cool part is that all server classes inherit from a common ancestor, so once you know how to code one – you have a leg up on using the rest of them.
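
To tie this back to the "shutdown" signal I mentioned above: the OnMessage handler is where you would react to it. The snippet below is just a minimal sketch, assuming the payload arrives as plain text and that flipping Active back to false is enough to stop the listener:

Server.OnMessage := procedure (Sender: TObject; Data: variant;
  RequestInfo: TNJUDPRequestInfo)
begin
  // Treat the datagram payload as a plain-text command
  if Data = 'shutdown' then
  begin
    writeln('Shutdown signal received, closing down');
    // Assumption: setting Active to false stops the listener, so the
    // process can do its cleanup and exit gracefully
    Server.Active := false;
  end;
end;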

Running the server as a Linux Daemon

Like all languages, node.js has a few tools that are considered standard. It's like how we Delphi developers take components and libraries like GR32 and SynEdit for granted. These packages have become so standard that we forget how hard it can be for beginners to get an overview of what's available.

Turns out node.js is no different, and the tool everyone is using to run their node.js code as a dedicated background service (meaning that it will start during the boot sequence) is called PM2, or node.js process manager v2.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for 🙂

In short, PM2 is like a watchdog that keeps an eye on your service. If it crashes, PM2 will restart it (unless you tell it otherwise); it also does extensive logging for you, and it will even generate a startup script that you can add to the Linux (or Windows) startup sequence.

But the most important aspect of PM2 is that you can easily get some "live" info on your services, like how much memory they consume, their CPU consumption and everything else. It's also PM2 that makes it a snap to run your node.js servers in cluster mode, meaning that you can spread the workload over multiple CPU cores (and, with a bit more plumbing, multiple machines) for better performance.

Getting PM2 up and running is easy enough. Just make sure you have installed NPM first, which is the “node package manager” front-end. To install NPM you just write:

sudo apt-get install npm -y

And with NPM available on your system, you install PM2 through NPM (PM2 is of course written in node.js itself):

npm install pm2 -g

Next, having compiled our Smart Pascal project, we copy the file over to a shared folder on our raspberry PI (I installed Samba to make this easier), cd into the folder and type:

pm2 start backend/backend.js --name backend

Notice the "--name backend" part? PM2 allows you to assign names to your code. This makes it easier to manage them (not having to type the full path every single time).

To check if our service is now installed, type:

pm2 show backend

And voila! We have a live Smart Mobile Studio server running on a Raspberry PI 3! Now whip out Delphi and recycle those legacy services!
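
A few more PM2 commands worth knowing once your service is running (run pm2 --help for the full list):

pm2 list
pm2 monit
pm2 logs backend
pm2 restart backend
pm2 startup
pm2 save

The last two are the interesting ones for daemons: pm2 startup generates a boot-time startup script for your init system, and pm2 save stores the current process list so it is brought back after a reboot. And if you want to use more cores, pm2 start backend/backend.js -i max runs the server in cluster mode with one instance per CPU core.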

Smart Pascal, Pastafari PI, Amiga and all

September 9, 2016 Leave a comment

Hectic days, so I don't have as much time to blog as I used to. I have also found a new interest in electronics (total newbie, but I love it), so I jump from one thing to the next.

NodeJS, assembly and virtual machine

Whenever I have time I try to work as much on Smart Mobile Studio as I can. I keep working at a steady pace on the RTL.

At the same time I am also making headway on the assembler written in Smart Pascal. It is quite important for some of the future plans, and to be perfectly honest – it’s pretty cool! Basically I have mixed classic Acorn, MC68030 and x86 assembly language, adapted it to JavaScript (so no pointers, only references and offsets). The instruction set is fairly standard:

Unlike Java or the CLR, this one is register oriented. One of the major weaknesses of Java is how it pushes everything onto the stack, making calls slower than they have to be.

  • 32 data agnostic cpu registers
  • stack and frame
  • Program counter (PC)
  • Variable management
  • Inline constants
  • Asm compiles to codesegment assembly format
    • Support for const resource chunk
    • Instance frame oriented

The instruction set thus far is very fundamental. It contains instructions you will find on any processor (more or less).

Parameters always start with the destination (destination, source). Most instructions support all three addressing modes (register, constant / resource identifier, and inline data).

Inline data instructs the cpu to read a data segment directly following the instruction. For example, this is a perfectly valid assembly snippet:

  LDD R0, "This is a string "
  MOV R1, R0
  ADD R0, R1
  ; r0 now contains "This is a string This is a string"

You can however put that string (which is a constant) into the resource chunk of the bytecode format. Then you can reference it by id:

  ; here presuming the string has the id $200
  LDC R0, C[$200]
  MOV R1, R0
  ADD R0, R1
  ; Same result as above

The opcodes defined so far:

  • Alloc [identifier]
  • Free [identifier]
  • LD.C [register], [resource id]
  • LD.V [register], [variable id]
  • LD.D [register], [inline data]
  • PSH.C [resource id]
  • PSH.R [register]
  • PSH.D [inline data]
  • POP.R [register]
  • POP.V [variable id]
  • MOV.R [register], [register]
  • MOV.V [variable id], [register]
  • MOV.D [variable id], [inline data]
  • ADD.C [register], [resource id]
  • ADD.V [register], [variable id]
  • ADD.D [register], [inline data]
  • SUB.C [register], [resource id]
  • SUB.V [register], [variable id]
  • SUB.D [register], [inline data]
  • JSL [offset]
  • JSE [register], [offset]
  • BNE [offset]
  • BEQ [offset]
  • RTS
  • NOOP
  • CMP.C [register], [resource id]
  • CMP.V [register], [variable id]
  • CMP.D [register], [inline data]
  • MUL.R [register], [register]
  • MUL.C [register], [resource id]
  • MUL.D [register], [inline data]
  • DIV.R [register], [register]
  • DIV.C [register], [resource id]
  • DIV.D [register], [inline data]
  • AND.R [register], [register]
  • AND.C [register], [resource id]
  • AND.D [register], [inline data]
  • OR.R [register], [register]
  • OR.C [register], [resource id]
  • OR.D [register], [inline data]
  • NOT.R [register], [register]
  • NOT.C [register], [resource id]
  • NOT.D [register], [inline data]
  • MULDIV.R [register], [register]
  • MULDIV.C [register], [resource id]
  • MULDIV.D [register], [inline data]
  • XOR.R [register], [register]
  • XOR.C [register], [resource id]
  • XOR.D [register], [inline data]
  • LSR.R [register], [register]
  • LSR.C [register], [resource id]
  • LSR.D [register], [inline data]
  • LSL.R [register], [register]
  • LSL.C [register], [resource id]
  • LSL.D [register], [inline data]
  • MOD.R [register], [register]
  • MOD.C [register], [resource id]
  • MOD.D [register], [inline data]
  • SYSCALL [method id]

The second half is the disassembler, and then, naturally, the most important part: the CPU, or virtual machine. Like the disassembler it decodes the instruction bits, but then executes each instruction accordingly. At high speed, I might add.
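
To make the idea of a register-oriented virtual machine a bit more concrete, here is a tiny sketch of a fetch-decode-execute loop in plain Object Pascal. This is not the actual Smart Pascal implementation; the opcodes, record layout and integer-only registers are all made up for illustration (the real byte-code format has resource chunks, variable management and so on):

type
  TOpCode = (opLoad, opMove, opAdd, opSub, opJump, opHalt);

  TInstruction = record
    Op: TOpCode;
    Dst, Src: integer;  // register indexes (0..31)
    Data: integer;      // inline constant or jump target
  end;

procedure RunProgram(const Code: array of TInstruction);
var
  R: array[0..31] of integer;  // 32 general purpose registers
  PC, i: integer;
begin
  for i := 0 to 31 do
    R[i] := 0;
  PC := 0;  // program counter
  while PC < Length(Code) do
  begin
    case Code[PC].Op of
      opLoad: R[Code[PC].Dst] := Code[PC].Data;                     // "LD.D" style: inline data
      opMove: R[Code[PC].Dst] := R[Code[PC].Src];                   // "MOV.R" style
      opAdd:  R[Code[PC].Dst] := R[Code[PC].Dst] + R[Code[PC].Src];
      opSub:  R[Code[PC].Dst] := R[Code[PC].Dst] - R[Code[PC].Src];
      opJump: begin
                PC := Code[PC].Data;  // absolute jump
                continue;
              end;
      opHalt: break;
    end;
    inc(PC);
  end;
end;

The real thing obviously works on decoded byte-code rather than a ready-made record array, and on variants rather than plain integers, but the dispatch principle is the same.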

So, what on earth is that good for you ask? Well, I have written it in a way that makes it easy to port it to Delphi and Freepascal. So if you are into creating programming languages, game engines, portable services, emulators or whatnot — then this is a very handy piece of tech.

Once you have a working virtual machine, you can build the high-level language on top of that. And the fact that it's portable, and that you can execute the code inside your Smart Mobile Studio applications, your NodeJS services, or on Windows, Linux and OS X (as long as freepascal is there, you are good), opens up some interesting ideas.

Pastafari PI

Those that read my blog know that I absolutely love retro machines. Atari, Commodore 64, Amiga, Acorn — I love them all. I grew up with Zx81, Speccy, C64 and Amiga – so naturally I have so many fun memories with these systems that it’s bound to influence me as an adult.

Now I had a broken Amiga 500 in the basement, and I figured — why fork out $200+ for a PITop or some other solution when I can actually do a retro-mod myself.

After all, distros like Amibian (a Debian-based Linux) boot straight into the emulated Amiga environment. And the speed is phenomenal! The PI 3 emulates the Amiga 3.2 times faster than the most high-end Amiga ever created. With overclocking we are looking at speeds up to 4 times faster than a juiced up Amiga 4000/060 (!)

Well, it’s not finished yet, but I have basically cut the case and made room for a fancy ADX gaming keyboard with sexy led lighting. I had to solder the keyboard controller to make it fit onto the Amiga keyboard back-plate – and also cut the keyboard quite heavily.

The idea is that when I have all the internals working, I will do a lot of epoxy and plastic work to make it look more authentic. Right now it looks very rough and rugged, but it runs like a dream!

I also bought a cheap 2.5″ multi SD-card reader. Sadly, it came with an internal USB motherboard header, so I had to cut and do some soldering to turn it into a normal external USB connector. Now it just plugs into the PI.

I also adapted the floppy-drive opening on the side of the Amiga, so the SD-card reader now sits where the original 3.5″ floppy drive once lived.

I'm just waiting for the SD-card extender circuit so I can adapt the front of the Amiga and have the SD-card slot for the PI there. This will make it a snap to change SD cards whenever I want to use another operating system. I'm also soldering up a reset and shutdown switch that will also sit on the front (that thin region just above the keyboard).

To experienced technicians this no doubt looks like a complete mess, but this is my first ever hands-on electronics project. I haven't touched a soldering iron since high school, so it was nerve-wracking digging into the keyboard controller.

Enjoying some Pastafari PI computing – tremble before his noodly appendages!

The final step is naturally to do some plastic work. I bought a good dose of epoxy for this. Once that is done I have to sand everything down and make the cuts straight and better looking.

And then finally I can give it two coats of black spray paint, and that final coat of transparent paint for hardening. And voila — I’ll have a pretty cool rig to test and work with my Raspberry PI 3!

Booting into UAE on ARM/RPI2

January 25, 2016 1 comment

I was just about to go to bed, but this info is too important to just leave hanging, so I have to scribble it down here for all to enjoy.

If you own a Raspberry PI and find Linux to be less than welcoming, especially in the startup department, then you are not alone. And trying to get the Linux gurus to shed some light on how the heck you can alter the boot process is like talking to wizards pondering the mysteries of kernel callbacks or whatnot.

Ah! The awesomeness! Look at the bones, look at the bones!

But thankfully, today Chips, the author of UAE4All2 (an Amiga emulator for ARM), helped me out and gave me the full low-down on how to boot straight into UAE with no Linux desktop getting in the way! Meaning: you can now transform your PI into a dedicated Amiga emulator, one that doesn't start the Linux desktop at all. How cool is that!

For systemd based distros

How to boot directly into uae4arm FOR RASPBIAN JESSIE (and other Linux distros based on systemd). First, if you currently boot to the desktop, let's disable that by entering the following line in a terminal:

sudo systemctl set-default multi-user.target

Then reboot and check: you should be logged in automatically as the pi user, without any authentication prompt…

Now enter startx to continue customization:

We will add uae4arm at each bash launch (basically each time you enter command line mode):

Enter following line in a terminal in order to edit bashrc:

leafpad ~/.bashrc &

At the bottom add lines to execute uae4arm (update directory as your installation):

cd ~/uae4arm-rpi/
./uae4arm

On the next reboot you should land straight in uae4arm :)
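
If you ever want the normal desktop back, simply restore the graphical target and reboot:

sudo systemctl set-default graphical.target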

Initd based systems

How to boot directly into uae4arm FOR RASPBIAN WHEEZY (and other Linux distros based on init)!
The instructions below apply to the pi user, but you can substitute any other user you have created.
I prefer to edit files from the desktop, but you can use any other method too…

First, if you currently boot to the desktop, let's disable that by entering the following line in a terminal (raspi-config could probably be used too, to be confirmed):

sudo update-rc.d lightdm disable 2

Now reboot and check: you should no longer land on the desktop, but instead be asked to log in in text mode. So log in as pi (user, then password) and enter startx to continue the customization.

Now open a terminal and edit /etc/inittab by entering the following line:

sudo leafpad /etc/inittab &

And add a # at the beginning of the line that asks for login, as below:

#1:2345:respawn:/sbin/getty 115200 tty1

Instead we will auto-login; to do this, add the following line just below the commented one:

1:2345:respawn:/bin/login -f pi tty1 </dev/tty1 >/dev/tty1 2>&1

Now save, reboot and check: you should be logged in automatically as the pi user, without any authentication prompt…

Now enter startx to continue customization:

We will add uae4arm at each bash launch (basically each time you enter command line mode):

Enter following line in a terminal in order to edit bashrc:

leafpad ~/.bashrc &

At the bottom add lines to execute uae4arm (update directory as your installation):

cd ~/uae4arm-rpi/
./uae4arm

Now reboot: you should land straight in uae4arm :)

For Raspbian Jessie it is completely different, since it uses systemd instead of init during the boot sequence…

Smart Raspberry PI project

October 31, 2015 7 comments

Raspberry PI is a great little $35 mini computer the size of a credit card. The latest version (2.0) comes with 1 gigabyte of ram and a reasonably powerful mid-range ARM processor. Raspberry PI is used by hobbyists, schools and IoT companies to create clever consumer gadgets. The sky is the limit, and what you can do with your PI is defined by imagination only (and, I admit, some knowledge of Linux).

Smart Mobile Studio in full control of a Raspberry PI

My good friend Glenn, a citizen of Denmark, loves all things Linux – so when I asked him if we could hijack the boot sequence, drop the desktop and boot straight into full-screen Chrome instead… well, it didn't take long for him to figure out a way to do that. I have too little experience with the mysteries of Linux to make that happen (it would take weeks of fiddling), so it's good to have friends who know what they are doing.
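
For the curious: on Raspbian the launch itself can be as simple as starting Chromium in kiosk mode and pointing it at your Smart project. The URL below is just a placeholder for wherever your compiled project is served from:

chromium-browser --kiosk --incognito http://localhost:8080/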

A fully working desktop environment, took Glenn and myself 15 minutes to make

In essence we are creating a custom Linux distro which is designed to run your Smart Mobile Studio projects exclusively. This means:

  • full access to the filesystem
  • no content domain restrictions (CORS)
  • no blocking-operation restraints
  • No restriction on database or storage sizes
  • Your Smart program completely runs the show

As a gateway to the operating system and other interesting functionality, I have gone for a nodeJS server running in the background (started before the browser is displayed), which means the desktop (or your project) can use RPC calls for advanced functions.

This is pretty cool because it means that the UI will be completely abstracted from the service functionality. The service can be local and run on the same installation – or it can run on Azure or Amazon cloud servers on the other side of the world.

Think of it this way: you don't have to run it on the same device. You can upload your desktop (or kiosk) to a normal web-host, disable CORS on the host, and then use websockets to connect to the NodeJS layer, which can be hosted on another computer or domain.

But all of that is already possible today. This is one of the simplest things to make with Smart Mobile Studio actually. A much cooler project is what we are doing now with the Raspberry PI — giving your creations the ability to live on and control a full Linux system.

So what are you guys up to?

The first project is a Netflix-like media system (or an XBMC clone) with full support for USB wireless remote controls. To make it short: a $35 home theatre that plugs into your television, scans your movies folder (or external drive) and presents your movies the way Netflix does. It will download info from MovieDB.com and other media services for identification. It sounds complicated but it's actually very straightforward. One of the simplest solutions I've done with Smart Mobile Studio, to be honest.

Using a standard USB remote control is excellent because it registers as a touch device. So the buttons you press register as touch events in a predictable order (read: it works on all controls of this type).

What our custom Linux distro brings is a ready-to-use kiosk system. You can use it to display advertising in a shop window if you like, or add a touch screen and build your own ticket ordering station. Again, it's only limited by your own imagination.

A JavaScript cloud desktop

Depending on the success of the project we may go all-out and create the world's first JavaScript desktop. It is essentially Debian + Chrome + Smart Mobile Studio. This would require a bit more NodeJS magic, where each exposed node service (RPC) represents a distinct part of the "operating system". A virtual operating system running on top of a Linux stub. Pretty darn cool if you ask me. Who knows, maybe we can define POSIX for the cloud?

A fun hobby to say the least 🙂

Delphi for Linux imminent

September 18, 2015 4 comments
Ubuntu linux is sexy and stable

A little bird at Embarcadero let it slip that "Delphi for Linux is right around the corner". I can't talk about the source of this info (for obvious reasons), but those that follow my blog… 🙂 EMB denied for weeks that iOS support via FPC was a fact after that was posted here; no doubt we will see the same this time.

But if this indeed turns out to be the case, then Embarcadero is about to take multi-platform native development to a whole new level. DX already makes C# look like a joke, and Linux support would be the proverbial frosting on the cake, opening up the world of high-end Ubuntu services for every Delphi developer in the world.

Linux is the fastest growing Windows alternative these days, embraced by corporations, schools and governments around the globe at an alarming rate. So it does make good sense money wise.

As for how, what and who – that is for Embarcadero to reveal when the time is right.

For now, it's just a rumor (although straight from the horse's mouth).

So to sum up: every object pascal developer and company has worked HARD to put object pascal back on the map after Borland got whopped by Microsoft. And just look at the coverage: object pascal for iOS, Android, OS X and Windows is delivered by Delphi. JavaScript, NodeJS and the browser are covered by Smart Mobile Studio, and every platform under the sun is supported by FreePascal and derived forks.

It’s happy days 🙂

Firebird, daft or fab?

August 24, 2015 5 comments

Programmers and their databases have always been a pickle. In many ways it's like music, where everyone has their favorite band and sticks to them well beyond their expiration date or technical excellence. I will be frank from the start and say that I have very little experience with Firebird (or Interbase for that matter), but what I have seen so far is quite impressive.

Let the flame-wars begin

Firebird, the tardis of database engines!

The other day I posted a link to Firebird on our Delphi Developer forum on Facebook. I didn't think much of it, as it was just a "support our native tools" type of post. I made the huge mistake of using the word "enterprise" in the title, which spawned an avalanche of criticism. Apparently Firebird is (in the view of some) not an enterprise storage solution, at least not in the modern sense of the word. Or at least that was the impression I was left with.

Since I have little experience with Firebird I was not really in a position to defend or dispute what people told me. Many of the "facts" that were aired seemed reasonable – things like clustering and automated replication are indeed features we expect from an enterprise solution these days. Not to mention fallback safety for power-cuts (which I must admit I always leave to a dedicated UPS). That said, there are also other aspects of deploying database technology in large corporations which must be taken into account. It all depends on the needs, pricing and demand for scaling, I suppose.

But I must also say that I find Firebird quite pleasant to work with. I know of at least two companies that use it at scale, hosted both on Amazon and on a dedicated in-house server, processing thousands of requests every hour. With Amazon EC2 you at least don't have to worry about power-cuts, and replication is… well, we knocked out a dedicated backup system for Firebird using Delphi in roughly two days, wrapping the FireDAC services in neat specialized classes for our purposes and adding zip compression and FTP all in one go.

IBExperts

Late in the thread the founder of IBExperts, a company I guess all of us have heard about at one point or another, joined the debate. His company has specialized in Interbase and Firebird technology, and the thread quickly turned from doom-and-gloom to a more optimistic view of Firebird and what it can do when set up correctly. And that is the key concept: we expect Firebird to act and behave as a small-time embedded database, but it actually has a much wider scope. The official Firebird book is actually three massive bibles. It could simply be a case of people needing to study a bit harder to use it properly. Just look at the books produced for MSSQL about tuning, indexing and clustering.

Other users likewise came into the debate with their experiences, from handling terabytes of data to dealing with real-life problems such as (drumroll) clustering and replication. One user had operated a massive Firebird database in the worst possible electrical environment for years without so much as a hitch. Power-outs were almost a daily occurrence, yet Firebird kept on going with no big issues. Naturally I can't verify whether that is true, but I see no reason why a well-known Delphi developer should lie about a product which is essentially free, open and available to all.

Embedding

Embedded databases have become a market in themselves over the past 10 years. My personal favorite has always been Elevate Software's ElevateDB, the successor to DBISAM, which is probably one of the fastest and most robust Delphi components ever written. I stuck with Elevate for many years and have only recently, around XE3, held off on renewing my subscription. Not because I don't like it any more, but because my work shifted from purely Win32/64 client-server development to JavaScript/RTL architecture during the development of Smart Mobile Studio.

Another factor is that more and more really good databases are available, database engines which are free. MySQL embedded is really cool – albeit something of a black art to work with directly (on C level). At the same time Embarcadero have brought Interbase back from the dead and are now shipping the embedded version with Delphi itself. And a database which works universally on Windows, OS X, Android and iOS makes a lot of sense. And last but not least there is MSSQL embedded which ships pre-installed on Windows 8 and Windows 10. But for me that is the last possible option.

Added: SQLite was mentioned in a comment, this is indeed true – but my initial text here is about mid-range to high-end databases. Postgres is another system, but I must admit I’m a complete agnostic about that engine. I’ve so far only worked with Oracle, MSSQL, MySQL and for mid-range work: ElevateDB, DBISAM and DBase (way back in the day).

Daft or fab? Not quite sure yet

It's going to be interesting to get Firebird under my skin in the coming months. It does suffer from the problem of all open-source projects in that it's completely at the mercy of its contributors. Some of the tools are a bit rustic, like gbak, which is the tool you use to create incremental or full backups. Using this from Delphi does require a bit of manual labour – but hey, it's a free database engine! Last I checked, MySQL didn't have good tools either before someone decided to build them. The engine provider's own tools are typically command-line oriented because the audience is DB administrators, not programmers, and they script everything. Firebird is no different – which is a good thing!
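
For reference, a full backup and restore from the command line looks roughly like this. The paths, file names and credentials are placeholders; check the Firebird documentation for the options that matter to you:

gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /backup/mydb.fbk
gbak -c -user SYSDBA -password masterkey /backup/mydb.fbk /data/mydb_restored.fdb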

I think we live in exciting times. Delphi is getting back on track, and we see a growth of Delphi based products – especially client/server stuff, which is what Delphi is really good at. At the same time, technologies which stem from the Borland/Delphi milieu and sub-culture are getting more attention. There is a lot of cool stuff going on and it's a great time to be a Delphi, Smart Mobile Studio or Freepascal developer!

Hire a Delphi superhero!

May 18, 2015 2 comments

This is your chance to hire a Delphi super-hero, namely me! That’s right, I’m presently in the market for a new full-time job!

Note: if you need people for a short project then I personally must decline, but I can forward your position at Delphi Developer (4,000 Delphi developers) and here on my website (an average of 15,000 visitors per month).

I must underline the obvious, namely that I live in Norway with no intention of moving. However, I have a fully established home office which has been in use for three consecutive years, so I am used to working remotely, communicating through Skype, Messenger, Google Meetups and similar utilities. This has worked brilliantly so far. Full-time employment over the internet is finally a reality.

For Norwegian companies I can offer 50% commuting (100% in Vestfold), from Oslo to Akershus in the north, to Sandefjord and Larvik in the south. The reason I can only offer 50% is that I have custody of two children every other week, which means I am unable to drive long distances in combination with school runs and similar factors.

You need an awesome Delphi developer? Then I'm your man!

Jon Lennart Aasenden, 41 years old senior software engineer using Delphi

My experience is too long to list here, but if you are in the market for a dedicated software engineer then you have come to the right place. Simply drop me a line on e-mail (address at the end of this post) and I will get back to you with a full resume ASAP.

In short: I have worked with Delphi for 15 years, I am the author of Smart Mobile Studio and wrote the VJL (run-time library, similar to the VCL) from scratch. I have worked for some of the biggest companies in Norway. I have good papers, an excellent reputation, and am recognized for my skill, positive attitude and "yes we can" mentality.

Before my 15 years of Delphi I spent roughly the same amount of time programming games, multimedia and contributing to the demo scene. In my view, demo programming is one of the most rewarding intellectual exercises you can engage in. It teaches you how to think and prepares you for “real life solutions” as a professional.

In my career as a Delphi developer I have written several commercial products; ranging from a complete invoice and credit management application, games and multimedia products, serial management components for Delphi, backup clients and much, much more.

No matter what your company works with, as long as it’s Delphi I’m the right man for the job.

My kind of job

It goes without saying that I am looking for a purely Delphi centric position. Although I have worked with C++ and C# and have no problems navigating and adding to such a codebase — there can be no doubt that Delphi is the environment where I excel at my work.

I bring with me a solid insight into HTML5/JavaScript and am willing to teach and educate your existing staff in Smart Mobile Studio, which I am also the founder and inventor of; bringing your team up to speed with the next step in computing, namely distributed cloud computing, in which JavaScript and Object Pascal play central roles.

Torro Invoice

So if you are moving towards cloud computing or want to leverage the cloud for service oriented architectures using RPC or REST servers, I am the guy you want to talk to. Especially if you want to use object pascal and leverage nodeJS, JavaScript and the whole spectrum of mobile devices. In short, with me on your team you will have a huge advantage.

What you get is 15 years of Delphi experience; and before that, 15 years of software authoring, demo and game development on 16 bit machines.

Good qualities

I have included a small list of “ad-hoc” qualifications. The list below does not serve as anything other than indicators. I have been a professional programmer for many, many years and to list every single piece of technology is pointless. But hopefully the list is enough to give you an idea of my qualifications, depth and diversity.

  1. Write custom Delphi controls and UI widgets
  2. Create control packages and resource management for our projects
  3. Optimize procedures by hand by refactoring variables, parameters and datatypes.
  4. Write a database engine from scratch and also use existing engines such as Elevatedb, MSSQL, Oracle, Firebird, SQLite and MySQL
  5. Work with Windows API directly
  6. Work with freepascal and port code to Linux and Macintosh
  7. Write networking libraries, custom protocols and high-speed multithreaded servers
  8. Use classic frameworks like Indy (Internet Direct) and other out-of-the-box solutions
  9. Write advanced parsers for source-code and any complex text-format
  10. Write a real-life compiler and IDE (www.smartmobilestudio.com and quartexpascal.wordpress.com)
  11. Create a graphics library from scratch, supporting different pixel formats
  12. Work with large files, performing complex operations on gigabytes and terabytes
  13. Write mobile and desktop applications using Smart Mobile Studio and FireMonkey
  14. Create your server environment, architect your RPC (remote procedure call) service infrastructure
  15. Implement custom services through nodeJS, Smart Mobile Studio, Delphi and FreePascal
  16. Use the RemObjects product line: RemObjects SDK, Hydra and Data Abstract
  17. Link with C and C++ object files
  18. Use project management tools, scrum, SVN, github and package management
  19. Write, export and import DLL files; both written in and imported to Delphi
  20. Write solid architectures using modern technologies: XML schemas, stylesheet validation, REST and node services

.. and much, much more!

Note: While you may not have a need for all of these qualifications (or any one of them), the understanding that mastering the listed topics requires is of great value to you as an employer. The more knowledge and hands-on experience a programmer can offer, the better qualified he is to deal with real-life problems.

Tools of the trade

All programmers have a toolbox with frameworks, code collections or third party components they use. These are the products I always try to use in projects. I have a huge library of components I have bought over the years, and I have no problem getting to know whatever toolkit you prefer. Below is a short list of my favorite tools, things I always carry with me to work.

  • RemObjects SDK to isolate business logic in RPC servers, this allows us to use the same back-end for both desktop, mobile and html5 javascript.
  • RemObject Data Abstract
  • Remobjects Hydra
  • ElevateDB and DBIsam (DBIsam for Delphi 7-2010, ElevateDB for the rest)
  • Developer Express component suite
  • TMS Aurelius database framework
  • MyDac – MySQL data access components
  • MSDac – Microsoft SQL server data access components
  • Raize component suite
  • FreePascal and Lazarus for OS X and Linux services
  • Mono/C# when customers demands it (or Visual Studio)
  • Firemonkey when that is in demand
  • Smart Mobile Studio for mobile applications, cloud services [nodejs] and HTML5/Web

Tools and utilities

  • Beyond compare (compare versions of the same file)
  • Total commander (excellent search inside files in a directory, recursive)
  • DBGen – A program written by me for turning Interbase/firebird databases into classes. Essentially my twist on Aurelius from TMS, except here you generate fixed classes in XML Data Mapping style.

Note: For large projects which need customization I tend to write tools and converters myself. The DBGen tool was created when I worked at a company which used Firebird. They had two people hand-writing wrapper classes for nearly 100 tables (!). They had spent almost two months and covered about half. I wrote the generator in two days, and it generates perfect wrappers for all tables in less than 10 minutes.

Safety first

As a father of three I am only interested in stable, long-term employment. Quick projects (unless we are talking six figures) are of no interest, nor do I want to work for startup groups or kickstarters. I am quite frankly not in a position to take risks; children come first.

Many of my products are available for purchase online

I am used to working from my own office. I was initially skeptical of that, but it has turned out to work brilliantly. I am always available (within working hours, Norwegian time) on Skype, Google, Messenger and phone.

Languages and work format

For the past year I have been working full time with C# under Microsoft Visual Studio. Before that I used Mono for about four months, which I must admit I prefer over Visual Studio, and wrote a few iPhone and iPad apps. So I am not a complete stranger to C# or other popular languages. But as I have underlined in my posts on countless occasions – object pascal is a better language. It is truly sad that Borland was brought down all those years ago, because with them they dragged down what must be one of the most profound and influential languages of our time.

That said, I have no scruples about mixing and matching languages, be it RemObjects Oxygene, C#, C++ or Object Pascal. I also do some mocking-up in BlitzBasic, which is without question the fastest (execution wise) of the bunch, but not really suitable for enterprise level applications.

The point here being that I am versatile, prepared to learn new things and not afraid to ask. It would be a miracle if a man could walk through life without learning something new, so it's important to show respect and remain humble. Different languages represent different approaches to problem solving, and that is an important aspect of what it means to be a programmer.

The only thing I ask is that I am properly informed regarding the language you expect me to deliver in – and that I receive the required training or courseware in due time. Do not expect super-hero status in C#; I am well above average, but it cannot be compared to my efficiency and excellence using Delphi.

Contact

If you are interested in hiring me for your company, then don’t hesitate to contact me for a resume. You may e-mail me at lennart.aasenden AT gmail dot com.

Aros, a Linux alternative

May 16, 2015 2 comments

Lately I have written quite a lot about the old Amiga platform. The reason I am so excited about this is not because I’m reminiscing about my childhood, or because I’m trying to sell anything. Quite the opposite.

What I would like to present is a modern, working and extremely effective alternative to Linux, because that's what Aros and Morphos represent. I know this view may come as a surprise to people, even those active in the Amiga retro computing community. But it's nonetheless true.

What is an operating system?

An operating system is a methodology. Windows represents one such method, and by using their method and doing what Microsoft wants you to, you achieve and can enjoy the features Microsoft delivers through Windows.

OS X is likewise a methodology; it is wildly different from Windows and offers an equally different user experience. Where Microsoft Windows focuses more on being a workhorse for all things technical (which I personally think is better than OS X), Apple focuses more on creativity, special effects and visual beauty over technical achievements.

One of many, many ports

Linux represents the third methodology. It's both different from and equal to the two aforementioned systems. It is open, free and can be customized and tailored to the needs and wants of its users. The downside of this freedom is that, compared to Windows and OS X, Linux is extremely hard to work with on a lower level. You really need to know your stuff once you move away from the desktop, and if you want to make distinct changes to the operating system you must be prepared to master and excel at C/C++ programming.

What about Aros and Morphos

Now that we have put words on what an operating system represents and which alternatives are out there (there are more operating systems of course, but these three methodologies represent the most widely accepted), it's time to look at a completely different and very exciting alternative. Namely Aros.

Linux has an interesting story which in many ways resembles Aros's own history. If we go back in time to before Linux existed, universities and science programs primarily used Unix. Unix is, after all, designed to deal with hundreds and thousands of users, to keep data separate and to provide safety and security for everyone with an account. So it's perfect for college campuses, scientific organizations and businesses.

Emulation is excellent 🙂

But one day a small group of people started to rebel against the proprietary nature of Unix. Back then Unix was overly expensive, so expensive that no ordinary person could afford to own it. So this small group of people got together and decided to write a complete operating system simply called GNU. This system was free, open to all and could be downloaded for no fee whatsoever.

There was only one problem with GNU, and that was that it didn't have a kernel written from scratch, so people were using the Unix kernel quite illegally. But one day a guy from Finland called Linus Torvalds came along and wrote exactly that missing piece. Voila, GNU/Linux was born!

The Aros story is of course different, but there are some commonalities between the two. Aros dates back to the days when Commodore went bankrupt. It was unclear what would happen to the Amiga: would it end up at Gateway Computing? Would Escom buy it? Maybe Haage & Partner?

In the midst of all this, a group of coders joined forces and decided enough was enough. Commodore went bankrupt after years of abhorrent business decisions, wasting and throwing away the potential of the Amiga platform, losing the lead they had over PC and Mac due to negligence and greed – so these guys had simply had it with Commodore altogether. They decided to reverse-engineer the operating system from scratch to ensure that, no matter what these financial cowboys ended up doing, at least Aros would be there as a safe haven for serious users.

Years of development

Aros has been in development since 1996, shortly after Commodore went bankrupt. It represents a monumental piece of engineering: writing a complete operating system from scratch using nothing but technical documentation and API user-manuals as the source. The group has solved all the tasks which Commodore and its troll representatives gave as excuses for why AmigaOS would not be ported to x86. They have solved the 64k memory problems deemed so hard to work with by Microsoft and others, and last but not least, they have solved the missing kernel problem which haunted GNU before it became GNU/Linux.

And even if Aros made full use of the Linux kernel and driver database, it would still represent a profound achievement in computing. Raise your hand, everyone who knows a programmer who refuses to give up and continues working 19 years later (nineteen years, that's just mind-boggling).

The question is, what does Aros give you, the user?

The Amiga methodology

As clarified at the beginning of this article, an operating system is essentially a methodology. It's a way of thinking, a way of working and ultimately a way to approach technology and gain access to its benefits.

Aros has come a long way since 1996!

The reason Windows users find Linux so utterly difficult is because absolutely everything is different. Linux is based on a completely different mindset, and it forces its users to develop a specific mode of thought which in turn educates them about the system. The same can be said about Aros, except here everyone will find something familiar, because Microsoft, Apple and GNU all copied and stole parts of the Amiga when Commodore died (!). So what you are faced with here is the original, the real deal, the big kahuna and true whopper!

Aros and the Amiga are quite simply a fourth methodology; different from Windows, different from Linux and different from OS X. The key architectural goal of Amiga OS is to be user friendly, hardware friendly and resource friendly (Amiga OS will happily boot up in 4 or 8 megabytes of ram; the other systems require 1000 times more memory to run properly).

We have to remember that Amiga OS delivered a silky smooth multitasking, fully windowed desktop UI roughly a decade before Windows delivered anything comparable. And it delivered this on computers with between 512 KB and 4 megabytes of ram! Since the old 68k processors did not have an MMU (memory management unit), the Amiga could not support swap-files or memory protection. But the pre-emptive multitasking technique Amiga OS uses turned out to be damn effective! Even today it outperforms Windows, Linux and OS X.

But the real power of Amiga OS, which ultimately is the power of Aros, is hidden in the architecture itself. It's buried in the software API, the way drivers attach and work, how audio is dealt with and how graphics memory is allocated and handled. The power of the Amiga can be found in its REXX scripting, its global automation support and its signal management.

Can you imagine what an operating system designed to work in 4 megabytes of ram can do on a PC with 8-16 gigabytes, ultra-fast graphics cards and 16-24 bit sound chips? Well, it's exciting stuff, that's for sure.

Aros and Linux

Aros is right now where Linux was 15 years ago. Back then "Linux" meant (more often than not) a Debian-based distro, or perhaps a Slackware version. Those were the most popular, and Debian was the undisputed king of the hill. Most of our modern distros are forks which at one point derived from these older implementations.

Aros and Morphos can in many ways be seen as the Debian and Slackware of this world. Aros is not a "game operating system". It has nothing to do with the old games machines of the 80s and 90s. Amiga OS is a unique creation, written by four people at Amiga Inc, especially Carl Sassenrath, the author of Exec (the kernel). These guys just wanted to write an OS from scratch, and with the Amiga they got the chance. They didn't set out to make a game operating system. Quite the opposite: Carl and his team put together a unique operating system which for a whole decade was ahead of the competition. This is unheard of in the IT industry as a whole even to this day. And the OS they wrote is a high-performing, graphically excellent system which turns on a dime.

Not exactly graphically impaired

We should be thankful that the Aros team made it their life's mission to re-engineer Amiga OS for the future, because without it, the methodology and mode of thought which Amiga OS represents would never have survived. It would be a blast from the past, a true gem buried in the sand and forgotten.

Thanks to the Aros team, modern programmers and computer users can see for themselves just how CPU-, memory- and space-efficient the Amiga methodology and formula is. And with a bit of work, turning this operating system into a killer business platform should not be a problem.

What about drivers

I had a chat with an individual on the Amiga User forum over at Facebook about this very topic, and unsurprisingly he was against the idea. Actually, I don't think he really thought it through. He just went into "automatic" mode and said "It will never work, it's a waste of time". At which point I have to ask: "what exactly is a waste of time?"

Nvidia has an excellent API, and you get a mighty bang for your pennies!

Aros can be downloaded as a live CD and tested on any x86 PC. Naturally you can't expect it to work on every configuration on the planet, but the majority of modern PC's will work just fine. So what exactly is it that makes this so much worse than, say, Linux?

In order to ensure driver efficiency I would propose that Aros picks out a fixed and easily available hardware platform. I would suggest the following (and it is thought through, so please think about it before you criticize it):

  • Nvidia graphics hardware
  • Easily available motherboard with:
    • On-board sound hardware
    • On-board Ethernet (TCP/IP) controller
    • On-board Wi-Fi hardware
  • Intel i5 to i9 processor

The list of drivers required for such a setup:

  • Nvidia graphics API driver
    • VESA fallback driver [8 .. 32 BPP]
    • OpenGL integration unit
  • USB hub driver
    • USB mouse driver
    • USB keyboard driver
  • Standard Sound API driver (ASIO compliant)
  • Standard IDE device driver
    • CD-ROM recognition
    • Hard disk recognition
  • Standard SATA hard disk driver

This is essentially the set of drivers you would need. And just like Apple, you must ensure that all new Aros PCs have this spec. The CPU type and speed may vary, the Nvidia graphics card may vary and disk sizes may vary between models. But as long as you stick to the motherboard type and graphics adapter, you essentially have a device driver collection which needs no major work for at least 8 years. It all depends on the hardware vendors and how long a motherboard remains on the market (typically 4-5 revisions with an equal number of models).

Where you will find most of the future support work (or, like Ubuntu, a full online driver database and hardware recognition service) is under the USB topic above. Keyboard and mouse represent the bare basics. Once you have the USB hub and ports operating, the fun work begins 🙂 And there is almost no limit to the amount of hardware you might want to recognize. Personally I would opt for USB storage sticks before anything else, but that's not my department to decide on.

As you can see, it's not that hard to work with this. The hard part is finding people who are willing to write a driver, spend some time debugging and testing, and ultimately donate their work to the Aros desktop and operating system. But like I told my friend while we were debating, it's not black magic. Amiga OS itself was written by four (4) guys. What's important is that the key programmers know their stuff and are willing to donate some free time.

I for one can't wait to get started. I really hope the Aros team picks up on this, because there are tons of programmers out there who really want an alternative to Linux!

Just imagine what an Amiga-based web server could do. It's a system which delivers top-notch multitasking in 512 KB; now give it 512 gigabytes of RAM and a kick-ass CPU, and watch the sparks fly! It would outperform even Linux, that's for damn sure.

Windows stripped, revelations

May 2, 2015 1 comment

Lately I've gotten my hands on four super sexy HP thin clients. They are from 2006, so I got them cheap (almost give-away price), but they work brilliantly. I decided to go with Windows XP Embedded, since that system is much easier to tailor and mold to my needs. I also updated two of them to Windows 7 Embedded and DSLinux respectively.

How small is Windows, really?

When Windows 95 came out everyone was in shock: holy cow, what a bloated OS! We were all used to 16/32-bit operating systems like the Amiga, where the operating system came in at around 5 megabytes. That was partly because the Amiga kernel lived in ROM, though. We were also used to Macs, which operated on much the same principle: the OS fit on a single floppy, and parts of Mac OS lived in ROM as well.

Thin clients make excellent emulation machines. Here installing Amiga OS 1.3

Well, today I had a small revelation. Having studied up on Terminal Server and thin clients, I found out that Windows Embedded, no matter what version, is essentially the exact same software as the full desktop versions.

The only difference between Windows Embedded and the all-round commercial desktop edition is that the embedded version ships with absolutely nothing extra. You use an image builder application to tailor the Windows image, and absolutely nothing else is added to the installation.

This means no extra drivers or libraries, no extra images, no extra programs or protocols. All those myriad files floating around your hard disk are gone.

Guess how big the core Windows system is? Take a wild guess. 44 megabytes!

As you add modules to this, like remote desktop support, networking standards and protocols, applications, hardware drivers, USB support and so on, it slowly starts to grow. In the end my full Windows XP installation came in at a whopping 250 megabytes (roughly)!

USB rescue stick

As you can probably imagine, a hacker like myself (in the good sense of the word) could not let this opportunity slip by. So I immediately started constructing a rescue USB stick with the bare basics. How big, did you say? Well, the smallest USB stick I have is 8 gigabytes. I don't think you can get anything smaller these days, at least not in Norway.

Amiga UAE, installing OS 3.9

Either way, I partitioned the USB drive and gave the boot partition 1 gigabyte, with the remaining 7 gigabytes as a work partition for applications and tools. I don't think I'll be running out of space any time soon. It's a rescue stick, after all.
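For the curious, here is roughly how that layout can be scripted with diskpart on a modern Windows machine. This is only a minimal sketch under a few assumptions: the script file name is just an example, the stick shows up as disk 2 (check with "list disk" first, since "clean" wipes whatever disk is selected), the diskpart is from Windows 7 or later (older versions have no format command inside diskpart), and actually making the stick bootable still requires writing a boot sector on top of this, which is not shown. Also note that older Windows versions only mount the first partition on media flagged as removable.

    rem rescue-stick.txt : run with "diskpart /s rescue-stick.txt"
    rem WARNING: "clean" erases the selected disk completely
    select disk 2
    clean
    rem 1 GB boot partition, FAT32, marked active so the BIOS can boot from it
    create partition primary size=1024
    format fs=fat32 quick label=BOOT
    active
    assign
    rem the remaining ~7 GB becomes the work partition for applications and tools
    create partition primary
    format fs=ntfs quick label=WORK
    assign
    exit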

The really sexy thing about this is that it boots incredibly fast! Just plug in the USB stick, switch on the PC, and BAM, you're in Windows.

Now, naturally there are restrictions. Like I mentioned, the embedded version's strength is its modularity, so creating a universal USB stick is almost impossible without adding the standard Windows driver library. But you can pick a generic display driver with a semi-high resolution, a vanilla sound driver, a USB driver and a CD-ROM driver. And naturally you want to copy over the English, American and Norwegian keyboard layout files.

Well, I hope you found this interesting! The reason I bought the thin clients was initially to create emulation machines (turning them into a dedicated Amiga or Nintendo) and to learn more about terminal server software. But right now I'm tempted to whip out Delphi and create my own television set-top box 🙂