Quartex: Mali GPU glitches

February 20, 2019

Something really strange is happening in Chrome and Firefox on ARM. JavaScript is not supposed to be able to take down a system, and in this case there is no deliberate attempt to do so either; yet for some reason I have managed to take down the ODroid XU4 with both Chrome and Firefox lately.

ODroid XU4

I should lead with the fact that I'm not able to replicate this on x86. One of the things I really love about the ODroid XU4 is that it's affordable, powerful and probably the only SBC I have used that runs stably on the Mali GPU. As you probably know I tested at least 10 different SBCs back in 2018, and whenever there was a Mali GPU involved, the product was either haunted by instabilities or lacked drivers altogether.


Since the codebase for Chrome (and I presume Firefox) is ultimately the same between platforms, it leaves a question mark hanging over the ODroid. It is by far the most stable SBC I have tested so far (except for the Pi, which is sadly underpowered for this task), but stable doesn't mean flawless. And to be honest, Amibian.js is pushing web tech to the very limits.

Not Mali again

The reason I suspect the Mali to be the culprit behind all this is that the "bug", if we can call it that, happens exclusively during resize. So if there is a lot going on inside a desktop window, you can sometimes provoke the ODroid to cold-crash and reboot. You actually have to power the board down and switch it back on for it to boot properly.


Cloudripper ~ 5x ODroid XU4 [40 cores] in a PICO 5H cube

The resizing and moving of windows uses CSS transforms, which in modern browsers make use of the GPU. Chrome talks directly with OpenGL (or OpenGL ES), so the operations are proxied through that. And again, since OpenGL is pretty rock solid elsewhere, we are only left with one common denominator: the Mali GPU.
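
For readers unfamiliar with why moving a window ends up on the GPU at all, here is a minimal sketch of the technique (the function and element names are mine, not taken from the Amibian.js codebase):

```typescript
// Hypothetical sketch: moving a desktop window with a CSS transform.
// translate3d() promotes the element to its own compositor layer, so
// the browser hands the per-frame work to the GPU (OpenGL / GLES).
function moveWindow(el: HTMLElement, x: number, y: number): void {
  el.style.transform = `translate3d(${x}px, ${y}px, 0)`;
}

// Updating style.left / style.top instead would trigger layout and
// repaint on the CPU every frame, which is why desktops avoid it.
```

The point is that every drag and resize becomes a stream of GPU compositing operations, so a flaky GPU driver gets exercised constantly.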

The challenge is that there is no way to debug or catch this error, because when it occurs the whole system literally goes down. There is no exception thrown, nor is the browser process terminated (not even a log entry, so it's a clean cut); the system reboots on the spot. Since it then fails on reboot when opening X (setting a screen mode), I again point the finger at the GPU. Somehow a flag or lock survives the cold reboot, and that's why you have to manually switch the board off and on again.

This is the exact problem that made the NanoPI Fire useless. It only shipped with embedded Android drivers, and the X drivers could hardly open a display without crashing. Such a waste of a good CPU.

x86 as head

The ODroid is perfect for a low-cost Amibian.js experience, but I was unsure if it would handle the payload. Interestingly it handles it just fine; even with a high-speed action game running plus background tasks, we are not using even 50% of the CPU.

RAM is holding up too: memory consumption while running Tyrian with a few graphics viewers open is at a reasonable 700 MB (of 2 GB in total).


Tyrian jogs along at 45 fps ~ that is not bad for a $45 SBC

Right now this strange error is rare, but if it continues or grows into a problem (Chrome is hardly usable at all, only Firefox), then I have no option but to replace the master SBC in the cluster with something else. The x86 UP board is more than capable, but it would be a shame to break the price range because of that (excuse my language) crap Mali GPU. I honestly don't understand why board makers insist on using a Mali. Every board that has one is haunted by problems and gets poor reviews.

It will be exciting to check out the Dragonboard, although I fear 1 GB of memory will not be enough for smooth operation. Not without a SATA interface and a good swap file.

Android and Delphi

One alternative is to switch to Android and use Delphi to code a custom Chromium Embedded webview. I am hoping to avoid the overhead of Android, but Delphi would definitely be a bonus with embedded Android ("Android of things").

We will see.

Leaving Patreon: Developers be warned

February 17, 2019

As a person I'm quite optimistic. I like to think the glass is half-full rather than half-empty. I have spent over a decade building up a thriving Delphi and C++Builder community on social media, and I have built up a rich creative community for Node.js and JavaScript on the side; not to mention retro computing, embedded tech and IoT. For better or for worse, I think most developers in the Embarcadero camp have heard my name or engage with one of the 12 groups I manage around the world on a daily basis. It's been hard work but man, it's been worth every minute. We have so much fun and I get to meet awesome coders on a daily basis. It's become an intrinsic part of my life.

I have been extremely fortunate in that, despite my disadvantage (a spine injury in 2012), not to mention being situated in Norway rather than the United States, I work for a great American company, and I get to socialize and have friends all over the planet.

The global village is the concept, or philosophy, that technology makes it possible, no matter where you live, to connect and be a part of something bigger. You don't have to be a startup in the San Francisco area to work with the latest tech. Sure, a commute from Burlingame to Redwood beats a 14-hour flight from Norway any day of the week, but that's the whole idea: we have Skype now, and Slack and GitHub; you don't have to physically be on location to be a part of a great company. The only requirement is that you make yourself relevant to your field of expertise.

Patreon, a digital talent agency

Patreon is a service that grew straight out of the global village. If the world is just one place, one great big family of human beings with great ideas, then where is the digital stage that helps nurture these individuals? I mean, you can have a genius kid living in poverty in Timbuktu who could crack a mathematical problem posed on the other side of the globe. The next musical prodigy could be living in a loft in Germany, but his or her voice will never be heard unless it's recognized and given positive feedback.

“The irony is that Patreon doesn’t even pass their own safety tests. That should make you think twice about their operation”

My examples are extremes, I agree; most people on Patreon are like me, creative but absolutely not cracking math problems for NASA, nor singing a duet with Bono any time soon. But that's the fun thing about the world, namely that all things have value when put in the correct context. Life is about combinations, and you just have to find one that works for you.


The global village, the idea of unity through diversity

The global village is this wonderful idea that we can use technology to transcend the limitations the world imposes on us, be they nationality, color, gender or location. Good solutions know no bounds and manifest wherever a mind welcomes them. Perhaps a somewhat romantic idea, if not naive, but it seems the only reasonable solution given the rapid changes we face as a species.

In my case, I love to make software components in my spare time. My day job is packed and I couldn’t squeeze in more work during the weekdays if I wanted to, so I only have a couple of hours after-work and the weekends to “do my thing”. So being a total geek I relax by making components. Some play chess, the guitar or whatever — I relax by coding something useful.

Obviously "code components" are completely useless to anyone who is not a software developer. The relevance is further narrowed by the programming language they are written for, and ultimately the functionality they provide. Patreon for me was a way to finance the evolution of these components; a way of motivating myself to keep them up to date and available.

I also put a larger project on Patreon, namely the cloud desktop system people know as “Amibian.js” or “Quartex Web OS”. Amibian being the nickname, or codename.

Patreon seemed like the perfect match. I could take these seemingly unrelated topics, Delphi and C++Builder specific components and a cloud architecture, and assign each component and project to separate "tiers" that the audience could pick from. This was great! People could now subscribe to the tiers they wanted, and would be notified whenever there was an update or a new feature. And I could respond to service messages in one place.

The Tier System

The thing about software is that it's not maintained on infinite repeat. You don't fix a component that is working, and you don't issue updates unless you have fixed bugs or added new functionality. A software subscription secures a customer access to any and all updates, with a guarantee of X number of updates a year. And, equally important, it guarantees that they can get help if they are stuck.

“when you are shut down without so much as an explanation, with nothing but positive feedback, zero refunds and over 1682 people actively following the progress — that is utterly unacceptable behavior”

I set a relatively low number of guaranteed updates per year for the components (4). The things that would see the most updates were the Rage Libraries (PixelRage and ByteRage) and Amibian.js, but not until Q3 when all the modules would come together as a greater whole — something my backers are aware of and have never had a problem with.


Amibian.js running on ODroid XU4, a $45 single board computer

The tiers I ended up with were:

  • $5 – "high-five": I'm not a coder, but I support the cause
  • $10 – Tweening animation library
  • $25 – License management and serial minting components
  • $35 – Rage libraries: two libraries for fast graphics and memory management
  • $45 – LDEF assembler, virtual machine and debugger
  • $50 – Amibian.js (pre-compiled) and the Ragnarok client/server library
  • $100 – Amibian.js binaries, source and setup
  • $100+ – all of the above, plus pre-made disk images for ODroid XU4 and x86 on completion of the Amibian.js project (12-month timeline)

Note: each tier covers everything before it. So if you pick the $35 tier, that also includes access to the license management system and the animation library.

As you can see, the tier system that is intrinsic to Patreon solves the software subscription model elegantly. After all, it would be unreasonable to demand $100 a month for a small component like the Tweening library. A programmer who just needs that library and nothing else shouldn't have to pay for anything else.

Here is a visual representation of why my tiers are organized as they are, and how they all fit into a greater whole:

[Diagram: how the tiers depend on each other]

The server-side aspect of the architecture would take days to document, but a general overview of the micro-service architecture is fairly easy to understand:

[Diagram: overview of the micro-service architecture]

Each tier was picked because it represents a key aspect of what we need to create a visually pleasing, fast and reliable, distributed (each part running on separate machines or boards) cloud eco-system. Supporters can just get the parts they need, or support the bigger project. Everyone gets what they want; all is well.

The thing some people don't grasp is that you are not getting something to just put on Amazon or Azure; you are getting your own Amazon or Azure, with source code! You are not getting services, you are getting the actual code that allows YOU to set up your own services. Anyone with a server can become a service provider and offer both hosting and software access. And they can expand on this without having to ask permission or pay through the nose.

So it’s a little bit bigger than first meets the eye.

I Move In Mysterious Ways ..

Roughly 3 weeks ago I was busy preparing the monthly updates.

Since each tier is separate but also covers everything before it (as explained above), I have to prepare a set of inclusive updates. The good news is that I only have to do this once and then add it as an attachment to my posts. Once added, I can check off all the backers in that tier. I don't have to manually email each backer, or physically copy my songs or creations onto a CD and send it; we live in the digital age as members of the global village. Or so I thought.

So I published two of the smaller updates first: the full HTML5 assembly program, which can be run both inside Amibian.js as a hosted application, or as a standalone program directly in the browser. Here people can write machine code in the browser, assemble it to bytecodes, run the code, inspect registers, disassemble the bytecodes and do all the normal stuff you expect from an assembler.

This update was special because the program contained the IPC (inter-process communication) layer that developers use to make their programs talk to the desktop. So for developers looking to make their own web programs access the filesystem or open dialogs (normal system features), that code was quite important to get!


Although published, none of my backers could see them due to the suspended status

The second post was a free addition: the QTX library, an open-source RTL (run-time library) compatible with the Smart Pascal compiler. While not critical at this juncture, several of my backers use Smart Mobile Studio, and for them access to a whole new RTL that can be used for open source is very valuable indeed.

I was just about to compress the Amibian.js source code and binaries when I got a message on Facebook from a backer:

“Dude, your Patreon is shut down, what is happening?”

"What? Hang on, let me check," I replied, and rushed into Patreon, where the following header greeted me:

[Screenshot: the suspension banner displayed across my Patreon page]

What the hell, Patreon? I figured there must be some misunderstanding and that perhaps I had missed an email or something that needed attention. I get close to 50 emails a day (literally), so it does happen that I miss one. I also check my spam folder regularly in case my Google filters have been careless and flagged a serious email as spam. But there was nothing. Not a word.

Ok, so let's check the page feedback: have there been any complaints? Perhaps a backer has misunderstood something and I need to clear that up? But nope. I had nothing but positive feedback and not even a single refund request. In fact, the Amibian.js group on Facebook has grown to 1,662 members, which shows that the project itself holds considerable interest outside software development circles.

Well, let's get on this quickly, I thought, and rushed off an email asking why Patreon would do such a thing. My entire Patreon page was visibly marked with the above banner, so my backers never even saw the updates I had issued.

Instead, the impression people would get was that I was involved in something so devious that it demanded my account be suspended. Talk about shooting first and asking questions later. I have never in my life seen such behavior from a company anywhere, especially not in the United States; Americans don't take kindly to companies behaving like bullies.

Just Contact Support, If You Can Find Them

To make a long story short, it took over a week before Patreon replied to my emails. I sent a total of 3 emails asking what on earth had prompted them to shut down a successful campaign, and how they found it necessary to slander the project without even informing me of the problem. Surely a phone call could have sorted this out in minutes? Where I come from, you pick up the phone or get in contact with people before you flag them in public.


Sounds great, sadly it’s pure fiction

The response I got was that "some mysterious activity had been reported on my page", and that they wanted my name, address, phone number and credit card (last 4 digits). Which I found funny, because with the exception of credit card details, I always put my name, address, phone number, email and so on at the head of my letters.

I'm not a 16-year-old kid working out of a garage; I'm a 46-year-old established software developer who has worked professionally for close to three decades. Unlike the present generation I moved into my first apartment when I was 16, and was working as an author for various tech magazines by the time I was 17.

But what really pissed me off was that they never even bothered to explain what this "mysterious behavior" actually was. I write about code, clustering, Delphi, JavaScript and bytecodes, for Christ's sake. I might have published updates and code wearing a hoodie at one point, in a darkened room, listening to Enigma... but honestly: there is not enough mystery in my life to cover an episode of Scooby-Doo.

Either way, I provided the information they wanted and expected the problem to be resolved asap. Two days at the most. Maybe three, but that was pushing it.

It's now close to 3 weeks since this ridiculous temporary suspension occurred, and neither have I been given any explanation of what I have done, nor have they removed the ban on the content. I must have read their guidelines 100 times by now, but given their nature (they are more than reasonable), I can't see that I have violated a single one:

  • No pornography and adult content
  • No hate speech against minorities or forms of religious extremism
  • No piracy or spreading copyrighted material
  • No stealing from backers

Let’s go over them one by one shall we?

Pornography and adult content

Seriously? I don't have time to loaf around glaring at naked women (I'm a geek, I look weird enough as it is), and after 46 years on this planet I know what a woman looks like nude from every possible angle; I don't need to run around posting pictures of body parts. And if you are talking about me, good lord, is there a market for hobbits? Surely the world has enough on its plate. Sorry, never been huge on porn.

And for the record, porn is for teenagers and singles. The moment you love someone deeply, the moment you have children together, it changes you profoundly. You get a bond to your wife or girlfriend that makes you not want to be with others. Not all men are into smut; some of us are more deeply invested in a relationship.

Hate speech and religious extremism

Hm, that's a tough one (sigh). Did you know that one of my best friends is so gay that he began to speculate he was actually a liquid? He makes me laugh so hard, and he's probably the best human being I have ever met. I actually went with him to Pride last year, not because I'm gay but because he needed someone to hold the other side of the banner. That's what friends do. Besides, I looked awesome, what can I say.

As for religion, I am a registered Tibetan Buddhist. I believe in fluffy pillows, comfy robes, mother nature, and quite frankly I find the world inside us far more interesting than the mess outside. You can't be extreme in Buddhism: "Be kind now, or I'll hug you until you weep the tears of compassion!". Buddhism sucks as an extreme doctrine.

So I’m going to go out on a limb and say nuuuu to both.

Piracy and copyrighted material

Eh, I'm kind of writing the software from scratch before your eyes (including the run-time library for the compiler), so as far as worthy challenges go, piracy would be the opposite. I am a huge fan of classic operating systems though, like the Amiga; but unlike most people I actually took the time to ask permission to use an OS4-inspired CSS theme file.


The Amibian.js project is well organized and I have worked systematically through a well planned architecture. This is not some slap-dash project made for a quick buck

Most people just create a theme file and don't bother to ask. I did, and Trevor Dickinson was totally cool about it. And not a single byte has been taken or stolen from anyone. The default theme file is inspired by Amiga OS 4.1, but the thing is: the icons are all freeware. Mason, the guy who did the OS icons, has released large sets of icons under the GPL. There is also a website called OS4Depot where people publish icons and backdrops that are free for all.

So if this "mysterious activity" is me posting a picture of a picture (not a typo) of an obscure yet loved operating system, rest assured that it violates no one's rights.

Stealing from backers

That they even include this as a point is just monumental. Patreon is a service established to make that impossible (sigh): the window between when you deliver updates and when the payout is made is exactly when backers can file a complaint or demand a refund.

And yes, complaints of fraud would indeed (and should!) flag the account as potentially dubious. But again, I have not had a single complaint. Not even a refund request, which I believe is pretty uncommon.

And even if this were the case, shutting down an account without so much as a dialog, in 2019? Who the hell becomes a thief for 600 dollars? I'm not some kid in a garage; I make twice that a day as a consultant in Oslo. Why the heck would I set up a public account in the US only to run off with 600 bucks? I have standing offers for projects continuously; I haven't applied for a job since the 90s, so if I needed some extra money I would have taken a side project.

I even posted to let my backers know I had a cold last month, just to make sure everyone knew in case I was unavailable for a couple of days. Truly the tell-tale sign of a criminal mastermind if I ever saw one ...


Sorry Patreon, but your behavior is unacceptable

Hopefully your experience with Patreon has not been like mine. They spent somewhere in the range of 5 weeks just to register me, while friends of mine in the US were up and running in less than 2 days.

We are now 3 weeks into a "temporary" suspension, which means that most of my backers will run out of patience and just leave. It sends a signal that Patreon is whimsical about other people's trust, and that people take a risk if they back my project.

At this point it doesn't matter that none of these thoughts are true, because they are the thoughts anyone would have when a project remains flagged for so long.

What should scare you as a creator on Patreon, though, is that they can do this to anyone. There is nothing you can do, neither to prove your innocence nor to sort out a misunderstanding, because you are not even told what you have allegedly done wrong. I also find it alarming that Patreon doesn't actually have a phone number listed, nor do they have offices you can call or reach out to.

The irony is that Patreon doesn't even pass its own safety tests. That should make you think twice about their operation. I had heard the rumors about them, but I honestly did not believe a company could operate like this in this day and age, especially not in the United States. It undermines the whole spirit of the US as a technological hub. No wonder people are setting up shop in China instead, if this is how they are treated in the valley.

After this long, and the damage they have caused, I have no option but to inform my backers to terminate their pledges. I will have to relocate my project to a host that has more experience with software development, and that treats human beings with common decency and respect.

If I had by accident violated any of their guidelines, although I cannot see how I could have, I have no problem taking responsibility. But when you are shut down without so much as an explanation, with nothing but positive feedback, zero refunds and over 1,682 people actively following the progress, that is utterly unacceptable.

It is a great shame. Patreon symbolized, for a short time, that the global village had matured into more than an idea. But I categorically refuse to be treated like this and find their modus operandi insulting.

Stay Well Clear

If you as a developer have a chance to set up shop elsewhere, I urge you to do so. And make sure your host has common infrastructure, such as a phone number. Patreon has taken the art of avoiding direct contact to a whole new level. It is absolutely mind-boggling.

I honestly don't think Patreon understands software development at all. Many have suggested more sinister motives for my shutdown, since the project obviously is a threat to various companies. But I don't believe in conspiracies. Although, if Patreon did this to enough creators at intervals, the interest from holding the funds would be substantial.

It could be that the popularity of the project grew so fast that it was picked up as a statistical anomaly, but surely that should be a good thing? Not to mention a potential case study Patreon could have used as a success story. I mean, Amibian.js didn't get up and running until October, so stopping a project 5 months into a 12-month timeline makes absolutely no sense. Unless someone did this on purpose.

Either way, this has been a terrible experience and I truly hope Patreon gets its act together. They could have resolved this with a phone call, yet chose to let it fester for almost a month.

Their loss.

Hyperion vs Cloanto, the longest-running lawsuit in the history of computing?

February 15, 2019

Delphi and C++Builder developers will probably not have much interest in this, but as far as general IT news goes, this one is attracting interest far and wide due to the sheer absurdity involved. To be honest, I also think the case itself serves as a warning to companies and developers in general, because this truly is the best example of how badly things can go if you don't manage your patents and rights properly.

So while I’m loving Delphi’s 24th birthday festivities, I find the ongoing lawsuits so amazing that I have to write a few words.

[Edit]: To make the case even remotely understandable for people who have never read about it before, I have left out a ton of details: the whole Amiga Inc scandal (which I believe ordered production of OS4 to begin with?), Eyetech, H&P, the loss of the Amiga OS 3.9 source code. The gist of this post is not to dig into the details (also known as "the rabbit hole" in the community), but to give a short recount of the highlights leading up to the present situation, and to underline that the people who still care for the system, the Amiga community, are beyond fed up with this. I hope all parties get their act together and find a way to co-exist. For those who want to dig into the gory details spanning three decades, there is always the Amiga documentation project.

Some context

Long story short: back in the early 90s Commodore, a company that for close to two decades ranked as a giant of computing, collapsed. Years of mismanagement and poor, if not outright shameful, leadership had taken their toll on the once fierce giant. And as the saying goes: the bigger they are, the harder they fall. And boy, did Commodore fall.


Commodore ranked side by side with the biggest names in the industry

What people often forget is that tech companies have two types of currency. The first is what consumers consider valuable: things like the products they make, how much money is in the bank, the state of their inventory, good partners and retailers; all points of importance when running a business. The second, as we shall see, is the patent portfolio.

Major players, though, couldn't care less about these factors unless they align with their own needs. So from a PC company's perspective, getting rid of the Amiga and butchering Commodore for patents was a spectacular win. Because, and here we get into the nasty parts, to an already established competitor a dead tech company has one asset and one asset only: its patent portfolio.

So all that buying and selling we saw in the 90s, with the Amiga changing hands left and right, had nothing to do with saving the Amiga. The Commodore legacy was reduced to a piece of meat and thrown to the wolves, each ripping into its patents left and right. And while the analogy is graphic, that piece of meat held an estimated value of a billion dollars.

Patents are valuable because they represent repeat income and a level of financial security unlike ordinary currency. Large companies use patent portfolios in combination with their insurance. IBM is more or less the archetypal example of this. They remain one of the richest companies in the world, but spend their time tinkering with super-computers and science experiments. "Big Blue" hasn't "worked" in the true sense of the word since they started licensing out the PC as a platform. They own the patents for pretty much everything we know as a PC today, and don't need to compete. They make a fortune just sitting there.

Climbing up the rabbit hole


Gateway and Escom both tried to save themselves using the Amiga patents, but they failed

When Commodore fell, the vultures moved in quickly. People have focused so much on the Amiga computer and the branding aspect of Commodore that we often neglect that the true value of such a giant was never the end product, but the intrinsic value of its patents and technological inventions.

Very few knew the identity of the party in possession of the Commodore patent portfolio until quite recently. It caused quite a stir online when I published the name of the owner last year (both on this blog and in Amiga Disrupt on Facebook).

Just to underline: this information has never been secret or anything of the sort. It's just the type of information ordinary people wouldn't know where to find (myself included). You have to know where to look and what to look for. And while I have some experience with copyright cases and intellectual property, I would never have found it without a heart-to-heart with Trevor Dickinson, the major shareholder in A-EON, which produces the Next Generation Amiga systems (the X5000 and the upcoming A1222). He kindly helped me through the avalanche of older court documents and pointed me to an article series in AF Magazine that I had no idea even existed.

I should also stress that I have no special friendship with Trevor. I have talked to him on various occasions and we share a passion for the Amiga system. He has always been very kind, but I don't know him personally. Nothing I write here is done in his favour or out of some form of loyalty. I simply find that A-EON and Hyperion's plans and products make the most sense in 2019.

When the mysterious owner of Commodore and Amiga turned out to be Acer, my jaw dropped. They had been sitting on the patents all these years without making a sound, licensing out bits and pieces to A-EON and Cloanto respectively; and those two are just that, license holders, not intellectual property owners (except for what they have made themselves). From Acer's point of view the Amiga computer is worthless, and they wouldn't give a cup of coffee for the Amiga name or its legacy. I'm actually surprised they even bother to allow the licenses in the first place, unless they inherited them as part of a package deal. Don't know, and don't care.

How Acer got hold of them can only be speculated on, but I would imagine they snapped them up when Escom went under. How much of the original portfolio remains intact is anyone's guess. The classic Amiga OS source code was, as we know, acquired by Hyperion from Amiga Inc years ago. That was the 3.1 version. Interestingly, the 3.9 version was held by H&P (a German company) and was sadly lost when they exited the Amiga market permanently.

Workbench and hipsters

For those who haven't read or followed up on the "Commodore case": the license holders mentioned above (A-EON, Hyperion, Cloanto) have been at each other's throats since the Brits annexed India. Which is why this case has become interesting for others as well.


Nobody under 33 years of age would associate this with Commodore or Amiga.

To give you some examples of the epic battles at hand: they have argued in court over the right to use a checkered bathing ball, you know, those you can buy almost anywhere and that resemble a French tablecloth? Oh yes, I kid ye not.

They have gone to court over the misuse of said bathing apparatus, the misrepresentation of the ball, who owns the ball, its buoyancy; and let us not forget trademarking the word "Workbench" (the name of the desktop system the Amiga uses), a word today used only by hipsters in meth labs and Tool-Time-Tim wannabes on YouTube. The absurdities are so dense you could bottle them.

If we look at the many struggles since Commodore went under from a bird's-eye perspective, we are essentially seeing the same lawsuit on infinite repeat (with a few variations here and there). I got married, I had kids, and 15 years later I got divorced. And when I got back they were still at it! Good god guys, what a complete and utter waste of time, resources and talent (the lawsuits, not my marriage. Well, maybe both), not to mention counterproductive! If anything, these frequent lawsuits are destroying what both parties are trying to protect. Although I question whether one of them really is trying to protect it.

If I were to go back to school and re-invent myself, I would become an author. All I would have to do is take the Commodore story, place it in Middle-earth, give the people involved pointy ears, brutal weaponry and silly names, and voila! A tale that would make Tolkien himself weep; because great as his imagination was, never could he have concocted such a story. Not even Keith Richards, let loose in a pharmacy on "take all the drugs you can carry" day, could make up a timeline as insane as the Commodore aftermath.

Lawsuits 101: Cui bono?

To catch you up with the present events, let’s just go through the basics first.

It can be difficult to distinguish between Hyperion and A-EON, so let's start with a few words about that. Hyperion is ultimately a software company. They started (if I recall correctly) as a software house porting PC games to the Amiga platform.

I previously wrote that Trevor was the major shareholder in both companies; that was actually wrong, he holds a very small role in Hyperion. But who owns what here is ultimately beside the point. The relationship is that Hyperion represents the software branch and A-EON the hardware branch, and combined they make up the owners and producers of what are commonly called "Next Generation" Amiga machines.

A-EON and Hyperion hold the rights to develop Amiga OS, covering both the classic 68k version and the NG models, which are PPC-based. Cloanto have only sales rights, limited to the legacy 68k ROM kernel files and Workbench. That is ultimately what separates these two groups. So even though there are 3 companies involved, it's easier to regard them as two separate entities.

And yes, we could argue that OS4 was instigated by Amiga Inc earlier, but I'm trying to keep this readable for people who haven't read anything about this silliness before, so I'm skipping all of that.


Amiga OS is loved by many, but frankly the point of fighting over it has long since passed. A teenager today knows PSX, Xbox and completely different brands

Until recently, A-EON and Hyperion have focused completely on their Next Generation system: A-EON creates the hardware and Hyperion the software. Hyperion also offers the older legacy ROMs and Workbench in their webshop, but until recently they have been more interested in selling next-generation software and machines.

Cloanto have been exclusively about legacy. They have no license that involves software development, and are for all intents and purposes a retro retailer (or undertaker, if you will). They sell old Commodore stuff, and that's it. So while the two sides have argued like cats and dogs over absolutely everything, like that worthless boing ball and the name "Workbench", they at least managed to co-exist somehow.

That was, until Hyperion listened to the Amiga community and released an update for the 68k platform. Which is perfectly within their rights to do. They have a license that covers both 68k and PPC. Acer has set a clause (from what I can tell) that they are not allowed to touch x86, but as far as 68k and PPC are concerned, Hyperion is well within its rights to issue an update. After all, they own the source code for Amiga OS 3.1, which I mentioned above; Cloanto does not.

The response from the community was quite frankly outstanding. Finally, a proper update for both Workbench and the kernel! Everyone was ecstatic, and the whole scene was filled with hope that things were finally moving forward. This was, after all, the first real update since Napoleon was in office!

Cloanto, however, not so much. Even though they share the sales license with A-EON, they have no rights to the new software created. They don't make a penny on the new 68k kernel (ROM files) or the new Workbench. They can continue to sell the older variations of Amiga OS, but they have no legal right to software written and issued in 2018. So Cloanto responded like they always have: by issuing a lawsuit.

So the reason Cloanto took Hyperion to court for the thirteen-thousandth time has nothing to do with open source (a rumour that was planted before Xmas). It is motivated purely by greed, and by the fear that the Amiga might actually spring back to life.

And this is where we get to the nasty parts

Legacy software undertakers


Legacy software is not unlike the undertaking business

First of all, and I want to make this crystal clear: Cloanto's entire business model rests on the Amiga remaining dead. In a bizarre twist of irony, the self-proclaimed caretakers of the Amiga actually face financial ruin if the Amiga ever became popular or rose from the grave. Stop and think about that for a moment: they make money on the Amiga remaining a dead system.

The only product Cloanto has actually produced is a pixel paint program called PPaint, which was awesome back in the previous century.

The state of affairs for the past 18 years is that Cloanto depends completely on an emulator, UAE, short for "the Unix Amiga Emulator", when it comes to the Amiga. Which, ironically, is not Cloanto's work at all, but an emulator created by Bernd Schmidt, Toni Wilen and Mathias Ortmann; none of whom have received a penny despite Cloanto profiting from their work for close to two decades (!)

The selling of legacy Commodore software I have no problem with at all. But what bakes my noodle is forking UAE and selling it for profit without giving something back to its original authors. I have yet to see the source code for Amiga Forever on GitHub, for example. The terms of the GPL are pretty straightforward. I'm not saying the source code does not exist; I'm simply saying that Cloanto has gone out of its way to keep it hidden.

Sure, it may be legal, but I find it somewhat tasteless: profiting from UAE for all those years, and not even a symbolic sum for the guys who keep UAE going? Had they actively participated in and contributed to the UAE codebase, I would have applauded them for it. Sadly, Cloanto presents itself as a blatant opportunist more than a preserver. They say one thing, but their actions speak of something else entirely.

And don't get me wrong, Hyperion and A-EON have more than enough mistakes on file. But when comparing Hyperion's mistakes against Cloanto's, remembering that both sides have an obligation to represent Acer's financial interests to the best of their ability, you cannot help noticing that they are worlds apart. Hyperion is producing new software, A-EON new hardware, and they have even given the much-loved 68k systems a do-over. That is their responsibility to Acer, who can ultimately pull the plug on either should they be so inclined.

This is where I get a bit worked up, because Cloanto have nothing to do with software or hardware development. It is quite frankly none of their business (in the true sense of the word). They have licensed the old kernel and Workbench; they have also licensed the C64 ROMs; and that is where their role ends. Yet they spend more time trying to obstruct Hyperion (and, by consequence, A-EON) at every step of the way.

While I have no idea who sits on the C64 rights these days, the C64 Mini has sold in good numbers around the world. Since Cloanto is the only company with C64 ROM rights, I presume they have cashed in on that? As always it's hard to tell, because more than one company claims to sit on pieces of the true Commodore legacy.

So to sum up: we have one side producing new hardware and new software, and doing updates, which is their obligation and right. And we have another party who has created nothing, not even the heart of their own business, demanding a cut of something they shouldn't even be involved in (!)

Greed, the mother of invention

Cloanto’s motives should be pretty obvious by now, but let’s hash through it.

With a new Workbench and kernel out in the wild, Cloanto find themselves in a difficult position. Who would want to buy an older kernel or Workbench when there is a newer, 2018 version available? Well, I would like all of them, to be honest, but yes, I obviously want to use the new versions as much as possible.


The A1222 was due out Q1 2018. It remains on hold until the lawsuits are finished. Keeping Hyperion and Aeon in court is a matter of survival for Cloanto at this point

But that alone is not enough to explain Cloanto's panic-stricken behavior. They could have welcomed the new update and simply licensed it, like they should, because they have no right to another company's work.

Instead they ran out and bought the remnants of that company I mentioned earlier, Amiga Inc, a straw company with a terrible reputation involving fraud and investor scams. A company that for some magical reason had the rights to the name "Amiga" (like that holds any value in 2018, good lord, what are you people doing) and sat on the source code for the OS. This is the same source code that Hyperion ended up buying, which is no doubt the foundation for the update before Xmas.

Why would they go to such lengths to secure a superficial paper tiger like Amiga Inc? Trying to reverse the process? Looking to hijack the Amiga name? What gives? It's almost as if Cloanto is looking for something to fight over, desperate to keep Hyperion in court for as long as possible.

And why would they refuse to sell 2000 ROMs to Gunnar and me for making ready-to-use Amiga "mini" machines? If I didn't know better, I'd say they are brewing something. The market is just ripe for retro, and their behavior towards us hints that they are not very happy about Amibian's existence.

It makes even more sense when you factor in the long-awaited A1222, a whole new Amiga that A-EON and Hyperion are 100% invested in bringing to market.

The Amiga A1222 is a Next Generation PPC Amiga that should retail at around USD 450. This product was supposed to reach the market in Q1 2018, but with the lawsuit(s) and the drain on funds, getting the product out the door has been impossible. So much so that Cloanto is now damaging Hyperion (and A-EON) by proxy.

Around Xmas 2018 Cloanto began spreading the rumor that they were fighting to "open source Amiga OS". That is a blatant lie, and I was tempted to write a piece there and then, but I have been busy with work. I also thought Amiga users wouldn't fall for such an evident lie, but some people actually cheer Cloanto on, believing that Cloanto can somehow "help" the Amiga platform. For Christ's sake, Cloanto doesn't even have a developer license, much less the right to open source Hyperion and Acer's intellectual property. I doubt Acer is even aware of just how badly Cloanto is going about their business. Buying the remnants of Amiga Inc might be an attempt to buy credibility, but it's 20 years too late.

The present legalities are, to be blunt, nothing more than a diversion designed to keep the A1222 out of the marketplace. The question is why, and whether they will try to replace it with something.

Although the motives are now painfully visible, so much so that they might as well be lit up in neon, I think Amiga fans should be very careful where they place their trust. I am sorry, but I would not trust Cloanto with a stick of gum, much less the computing legacy of a giant like Commodore. And they are brewing something, either directly or indirectly, mark my words.

Normally I don't take sides, but I seriously hope Cloanto wakes up and realizes that they are right now, and have been for some time, the spearhead that is keeping the platform in limbo. I have nothing against them personally, but we have now passed the point of no return. They are risking the codebase of a system that thousands of people care for.

I think I speak for quite a few when I say: Enough! Put that energy, time and money into making something, because whatever you guys started arguing over is long gone.

There is a whole generation that has grown up without any knowledge of Amiga. Who have no clue what Commodore was and represented. So while you guys have been fighting about who gets to sit where, the boat has left and you missed it.

Final words

You know what I find most annoying about the situation Cloanto has created? Hear me out here.

Sun Microsystems spent a fortune drumming up support for Java, selling people on a lofty dream where a whole operating system would be written as bytecodes, and special hardware would be made so that those bytecodes could run anywhere. The bytecodes would be portable between platforms and solve the problem of platform-bound software once and for all. Companies pumped billions into that dream, yet for all their wealth and power, they failed.

Meanwhile Cloanto, and by extension Hyperion, have had access to UAE since the 90s: a system that embodies all the traits Sun Microsystems attempted to create. And all they have done is add a menu to it. They have wasted close to two decades without realizing that UAE is the holy grail that Sun Microsystems failed to deliver.

68k machine code is bytecode if you execute it on another system, and the distinction between "virtual machine" and "emulator" is ultimately conceptual, not factual. UAE could have been repurposed as a virtual machine. There you have the compilers, the ecosystem and all the pieces you would need to deliver a portable, blisteringly fast software deployment system that is truly platform independent.

So, Cloanto, you have been sitting on a gold mine. And you didn't recognize it because you were too busy arguing over balls, chicken-lip logos, old ROMs and god knows what else.


You have had solid gold for ages, but you were too busy arguing over names to see it

I sincerely hope Acer takes an active role in their licensing, because as far as I can see, Cloanto is not acting in Acer's best financial interest (nor Hyperion's for that matter, who last time I checked can withhold any and all changes to their OS, leaving Cloanto with the dry bones of the past). Unless they perform a complete makeover before their next lawsuit, they are unfit to manage the intellectual property and licenses they have acquired.

You don't have a developer license, so stick to the legacy stuff and stop getting in the way of those who do.

And for Christ's sake, give the guys who make UAE a percentage. It is tasteless and ugly to watch this level of greed. Seriously.

Quartex Web OS: A cloud OS takes form

January 19, 2019

It's been a while since I've posted now. I have 3 articles in escrow, and every time I think I will finish them, I end up writing more. But yes, more Delphi articles are coming, and I have lined up both components and rich code that everyone will be happy about.

Please look before shooting

Before we dig into the new stuff, I want to clear up a misconception. We programmers often forget that not everyone knows what we do, and we take it for granted that everyone will instantly understand what we talk about. That is rarely the case.

I have noticed that quite a few have misjudged the project radically, thinking that the first version (Cloud Ripper) is just a toy, a mock desktop, or even worse: just a remake of a legacy system that "has no role in modern computing".

It is true that I have taken more than a little from Amiga OS in terms of architecture, but I have taken exclusively ideas that are good and work well under the async execution model. I have also replicated the way the filesystem is organized, things like REXX (which was added to OS X in 2015), and the menu system; these are indeed built on how Amiga OS did things. The same can be said about library functions. Not because they are old, but because they make sense. Many of the functions appear in other systems too, like GTK on Linux and the WinAPI on Windows. There are only so many ways to open a window, change the title, define scrollbars and execute processes.


Kiosk systems like this are great targets for the Quartex Web OS

While there are clear architectural aspects taken from older systems, that doesn't mean the system itself is old in any way. This system is designed to run as WebAssembly, ASM.js and vanilla JavaScript, which is async by nature. It is designed to run and share payload over several machines, not a single outdated CPU and chipset. You have swarm-based task solving, which is quite cutting edge if I may say so. None of these things were invented back in the day.

Some have also asked why this is even needed. Well, let me give you a simple use case.

One of my customers is doing work for Jensen, a Danish producer of IT hardware. They make mostly routers, wifi USB dongles and similar devices. But like many hardware vendors, their web interfaces leave a lot to be desired. Router web interfaces are usually quite annoying and poorly written; something that should have taken 5 minutes can end up taking 30 just because the design of the interface is rubbish.

With my solution these vendors will be able to drop a whole infrastructure into their products; an infrastructure that provides all the things they need to quickly build a great control panel and router interface. Things like filesystem mapping, and being able to store data to the filesystem through an established WebSocket protocol, all of it wrapped up in a simple but powerful API. Their settings and features can be represented as programs, which run in windows that are intuitively styled and easy to understand. They will also cut development time dramatically by calling the Quartex soft-kernel rather than having to re-invent everything from scratch.

That is just a tiny, tiny use case where the desktop and services make perfect sense. But also keep in mind that the same system can scale up to a 1000-instance Amazon super-computer if you need it to, providing software for your offices and development teams.

In 8 months (probably sooner) the desktop will be complete, and I will start building the first purely web-powered software development toolchain. Everything has been transformed into JavaScript (compilers, linkers, the whole lot): FreePascal, Clang C/C++ and much more. Developers will be able to log in and start producing applications out of the box. The fact that the entire system is chipset and platform independent is quite unique. People tend to use native code behind a facade of HTML5. Not here. Here you have over 4,000 classes and 800,000 lines of code just for the desktop client, looking back at you.

Hopefully this has shed some light on the project, and people will stop looking at this as "old junk". As a person who loves older computers, the Amiga especially, I am quite frankly astounded by the ignorance regarding that platform. A juiced-up 30-year-old Amiga will give any modern computer a run for its money when it comes to ease of use, quality software and pure productivity. 10 years before Windows even existed, Europeans enjoyed a colorful, window-based desktop with full multitasking. When we had to switch to the PC it was like going back to the 1500s in terms of functionality, and it wasn't until Windows 7 that Microsoft caught up with Commodore. So if I have managed to get over even 1% of the spirit of that machine, then I will be very happy indeed.

But to reduce a clustered, 40-core architecture built from modern, off-the-shelf parts and a multitude of Node.js services to "old junk" is nothing short of an intellectual emergency. Please read, digest and look more closely before passing judgement.

Right then, so what’s new?


The Quartex “Cloud Ripper”

Where to begin! As mentioned in my previous post, Amibian.js is a cluster system, and as such the project now has its first real hardware sorted! I have gone for a 5 x ODroid XU4 model, neatly tucked inside a PICO 5H case. The budget was set at USD 400, but with shipping and taxes it ended up costing around USD 600. That is not a bad price for the firepower you get (40 CPU cores, 20 GPU cores and 16 GB of RAM); the ODroid is a powerful, stable and reliable ARM SBC (single-board computer). In benchmarks the Raspberry Pi 3B scored 830 Dhrystones, while the ODroid scored 5500 Dhrystones. And my architecture uses five of them, so this is a $600 super-computer built from off-the-shelf parts.

The back-end server has had several bugs fixed, especially the problems with paths and databases. You can now edit the settings.ini file and tell the system where the database should be created or accessed from, set the port for the server, and choose whether it should use SSL + secure WebSocket, or ordinary HTTP + WebSocket.


40 ARM CPU cores, that is a lot of firepower for USD 200!
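
Purely as an illustration (the key names below are my own guesses; check the file shipped with the server for the actual ones), such a settings.ini could look something like this:

```ini
; Hypothetical settings.ini sketch - key names are illustrative only
[server]
port=8090
use_ssl=false            ; true = HTTPS + secure WebSocket (wss://)

[database]
path=/srv/amibian/data   ; where the database is created or accessed
```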

I am also ditching the TW3NodeFileSystem driver for the server logic and using ordinary node.js calls there. The TW3NodeFileSystem driver is mounted as you perform a login, and it acts as a sandbox, mounting your folder as a device (and making sure you can never touch files outside your "home" server folder). We still need to implement a proper UNIX directory parser, but that is easy enough.
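
The sandboxing idea itself is straightforward. Here is a minimal sketch in plain Node.js terms (my own illustration, not the actual TW3NodeFileSystem code) of how a per-user mount can refuse paths outside the home folder:

```typescript
import * as path from "path";

// Hypothetical sketch of per-user path sandboxing; not the actual
// TW3NodeFileSystem implementation.
function resolveSandboxed(homeRoot: string, requested: string): string {
  // Resolve the requested path relative to the user's home folder.
  const resolved = path.resolve(homeRoot, "." + path.sep + requested);
  // Refuse anything that escapes the sandbox (e.g. "../../etc/passwd").
  if (resolved !== homeRoot && !resolved.startsWith(homeRoot + path.sep)) {
    throw new Error("Access outside the user sandbox is denied");
  }
  return resolved;
}

// resolveSandboxed("/srv/users/jon", "docs/readme.txt") -> allowed
// resolveSandboxed("/srv/users/jon", "../other-user")   -> throws
```

The real driver does considerably more (device mounting, async I/O), but the essential guarantee is this prefix check after normalization.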

Quartex Pascal

Yes, I have picked up Quartex Pascal again, which originally started in 2014. I have started writing a new RTL for DWScript, which is an alternative to Smart Mobile Studio. It is different from the Smart RTL, and is closer to FMX than the VCL.

Eventually the Quartex Web OS and all its services will compile without code from Smart Mobile Studio.

Hosted applications, messages and our soft-kernel

The biggest news, which is also the trickiest to get right, is getting hosted applications (applications hosted in IFrame containers) to communicate with the desktop. As you probably know, browsers have rigid security measures, and the rules for threads (web workers) and separate processes (frames) are severe, to say the least.


The LDEF assembler is the first application to grace the system

A secondary application hosted in a frame has absolutely no access to the rest of the DOM, meaning the code has no way of calling functions or manipulating elements outside its own DOM in the frame container. This is a good system, because we don't want rogue applications causing havoc.

The only way an application can talk to the desktop is through messages. And while this sounds easy, remember: we are building a solid system, not just slapping something together. The flow looks like this (a code sketch follows the list):

  • After loading a hosted application, the desktop sends a handshake request. It does this at intervals until the application accepts.
  • When the application replies with a handshake message, the desktop sends a special message-channel object to the app. All communication with the desktop must happen on that secure channel.
  • With the channel obtained, the application has to provide the application manifest file. This is a special INI file containing information about the program, including access rights. None of the soft-kernel API functions will execute until a valid manifest file has been delivered.
  • Once the manifest has been sent and accepted, the hosted application is free to call the soft-kernel functions.
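
Here is a rough sketch of the desktop side of that handshake, using the standard postMessage and MessageChannel APIs. The message names and the routeKernelCall stub are my own inventions for illustration, not the actual Amibian.js protocol:

```typescript
// Hypothetical sketch of the desktop side of the handshake.
function handshake(frame: HTMLIFrameElement): void {
  // Poll the hosted frame until it answers.
  const timer = setInterval(() => {
    frame.contentWindow?.postMessage({ type: "handshake" }, "*");
  }, 250);

  window.addEventListener("message", (e: MessageEvent) => {
    if (e.data?.type === "handshake-ack") {
      clearInterval(timer);
      // Hand the app a private channel; all further traffic uses it.
      const channel = new MessageChannel();
      channel.port1.onmessage = (m) => routeKernelCall(m.data);
      frame.contentWindow?.postMessage(
        { type: "channel" }, "*", [channel.port2]);
    }
  });
}

function routeKernelCall(msg: unknown): void {
  // Stub: validate the manifest, then dispatch soft-kernel calls.
}
```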

The above might sound simple, but it requires several sub-technologies to be in place first:

  • Call stack: a class that keeps track of sent messages and their callbacks. When a response arrives it executes the correct callback to deliver the response. This is a kind of "promises" engine for message delivery.
  • Message factory: matches message-data to the correct message class, creates the instance and de-serializes the data automatically for you
  • Message dispatcher: allows you to register a message with a handler procedure. When a message arrives, the dispatcher calls the message-factory and then calls the correct handler (a sketch follows after this list)
  • Base64 encoding at the byte-array, stream and buffer level (JavaScript only ships with the string-based atob and btoa, so raw byte-array support had to be written)
  • String to UTF8 byte-array encoding
  • UTF8 byte-array to String encoding
  • escape and unescape for byte-array, stream and buffer
  • URI-encoder for byte-array, stream and buffer
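
As a rough illustration of how the factory and dispatcher cooperate, here is a small TypeScript sketch. The class and method names are my own and do not reflect the actual Smart RTL API:

    // Illustrative sketch only - names are not the actual Smart RTL API
    type MessageHandler = (msg: { type: string }) => void;

    class MessageDispatcher {
      private handlers = new Map<string, MessageHandler>();

      register(messageType: string, handler: MessageHandler): void {
        this.handlers.set(messageType, handler);
      }

      dispatch(raw: string): void {
        // "factory" step: match the data to a type and de-serialize it
        const msg = JSON.parse(raw);
        const handler = this.handlers.get(msg.type);
        if (handler) handler(msg);
      }
    }

    const dispatcher = new MessageDispatcher();
    dispatcher.register("set-window-title", (msg) => console.log(msg));
    dispatcher.dispatch(JSON.stringify({ type: "set-window-title" }));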

But that was just the beginning; I also had to introduce an object that I have been dreading to even start on, namely the "process" class. The process is not just a simple reference to the frame container: it has to keep track of the WebSocket endpoint, application manifest, error handling, message routing and much more.

50077678_10155951521540906_6068161951656050688_o

Clang compiled to WebAssembly, meaning we can now compile proper C/C++ in the browser

Since Amibian.js supports not just JavaScript, but also bytecode applications – the process object also contains the LDEF runtime engine; not to mention all the system resources a process can own.

The cool part is that things work exactly like I planned! There is plenty of room to optimize, but all in all the architecture is sound. And it was quite a hallelujah moment when the first API call went through at 00:00 on 19.01.2019! A call to SetWindowTitle(), where the hosted application set the caption of its main window purely via code. Cross-domain communication at its very best.

The LDEF Assembler

Yes, LDEF bytecodes are fantastic, and the first program I have made is a traditional assembler. I went all in and implemented a full text-editor to get better control, and also to get rid of the ACE code editor, which was a massive dependency. So glad we got rid of that.

So now you can write assembly code, assemble it, run it, disassemble it and even dump the bytecodes to the window. You will be able to save the bytecodes to disk by the end of this weekend, and then run the bytecode programs from the shell or the desktop. So we are really making progress here.

49938355_1169526123220996_502291013608407040_o

A good shell / pipe infrastructure is the key to a powerful desktop

LDEF is the bytecode system that will be used to build high-level languages like Basic and Pascal. Since Freepascal is now able to compile itself to JavaScript, I will naturally add that to the IDE next fall; the same is true for Clang, which has been compiled to WebAssembly and can itself generate WebAssembly.

So C/C++ and object pascal are already working and waiting for the IDE.

LDEF is a grander system though, because libraries can be loaded and used by Delphi, C++ Builder, C# or whatever you fancy. It can be post-processed to real machine code, or converted to pure WebAssembly. It has a much wider scope than stack machines like the CLR and Java, and it's more natural for assembly programmers because it's based on real CPUs. It's a register-based virtual machine, not a stack-machine.
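
For readers unfamiliar with the distinction, here is a toy register-machine interpreter in TypeScript. This is purely illustrative and far simpler than LDEF itself; the opcodes and encoding are invented for the example:

    // Toy register-based interpreter - NOT the actual LDEF instruction set
    enum Op { LOAD, ADD, MOVE }
    type Instr = { op: Op; dst: number; src: number };

    function run(program: Instr[]): number[] {
      const r = new Array<number>(8).fill(0); // 8 general-purpose registers
      for (const i of program) {
        switch (i.op) {
          case Op.LOAD: r[i.dst] = i.src;     break; // load immediate value
          case Op.ADD:  r[i.dst] += r[i.src]; break; // register + register
          case Op.MOVE: r[i.dst] = r[i.src];  break; // register copy
        }
      }
      return r;
    }

    // r0 := 2; r1 := 3; r0 := r0 + r1  ->  r0 ends up as 5
    run([{ op: Op.LOAD, dst: 0, src: 2 },
         { op: Op.LOAD, dst: 1, src: 3 },
         { op: Op.ADD,  dst: 0, src: 1 }]);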

More?

Tons, but you have to visit my Patreon page to keep track. I try to publish as much as possible there rather than here. I post a bit in both places, but the proper channel for Amibian.js (or "Quartex Web OS" as its official name is) will always be Patreon.

50108015_314551789176307_8213345524409958400_n

The picture viewer now has momentum scrolling in full-mode.

Also, I fixed more bugs in the Smart RTL than I can count, and re-made window movement. Window movement now uses the GPU, so windows are silky smooth everywhere. Resizing will be optimized next; after that you can't really tell it's not native code at all.

Delphi Component updates

Yes, Delphi is also a huge part of the Patreon project, and you will be happy to hear that the form designer (which shares a codebase with the graphics application components) has seen more work!

You can check out some of the changes to the form-designer here:

These changes will be in the January update (end of month) together with all the changes to Amibian.js, HexLicense, the Tween library and all the rest 🙂

Cheers!

Amibian.js under the hood

December 5, 2018 2 comments

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!

intro

In a life-preserver no less 😀

But, with any new technology or invention there are two common traps that people can fall into. The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google's V8 run JavaScript almost as fast as compiled binary code ("native" meaning machine code, like that produced by a C/C++ compiler, a Pascal compiler or anything else that produces programs that run under Linux or Windows).

It takes some adjustment, especially for traditional programmers that haven't paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled ("just in time", meaning the compilation takes place inside the browser) to real machine code using state-of-the-art compiler techniques. So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap you can fall into is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don't fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won't deliver. Rome was not built in a day, and it's wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it's easy to mix these things up, I'm underlining this early – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined create a portable operating-system that renders to HTML5. A system that was written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It's much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.

smart_ass

Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to another without any changes. So if you have a running Amibian.js system on your x86 PC and copy all the files to an ARM computer, you don't even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup” please remember what I said about JavaScript: the JavaScript in 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” – into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It runs flawlessly at the browser's 60 fps rendering cap, with performance to spare, and makes full use of standard browser technologies (WebGL).

utube

Click on picture above to watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • An HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between a traditional, machine-code based OS and our web variation is that our version doesn't have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).

smartdesk

Enjoying other cloud applications is easy with Amibian.js, here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I am deliberately avoiding a bare-metal OS; had that been the goal, I would have written the system in a native programming language like C or Object-Pascal. So I am not using JavaScript because I lack skill in native languages; I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances, because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up "the amibian kernel" in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs written for Amibian.js can call on these to perform a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it's designed to run clustered, using as many CPU cores as possible. It's also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn't need the latest and coolest hardware to deliver good performance. You can build a cluster of old PCs in your office, or a handful of embedded boards (the ODroid XU4, Raspberry PI and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above come in.

As a software developer, many of my customers work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, and ticket systems for trains and buses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, and unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock-solid application model suitable for distributed computing. You might have 1 ticket booth, or 10,000 nationwide. There are no systems available that are designed to deal with web technology on that scale. Yet 😉

Let's look at a couple of real-life scenarios that I have encountered; I'm confident you will recognize a common need. Here are some roles that Amibian.js can assume to help deliver a solution rapidly. It also gives you some idea of the economic possibilities.

Updated: Please note that we are talking JavaScript here, not native code. There are a lot of native solutions out there, but the whole point here is to forget about CPU, chipset and target, and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router, you log in to your router. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update, since it's connected to a native Apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch-capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel, with varying degrees of success.
  • Hotels have more or less the exact same need, but on a smaller scale; especially the larger hotels, where the lobby has information booths and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on their size they can need anything from a single node to a hundred.
  • Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both, and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages and enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5-based media applications and give them the same depth as a native operating-system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media-rich applications (see Smart Mobile Studio below). Just like Linux and Windows provide a wealth of libraries and features for native application development, Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past, with the machine that defined multimedia: the Commodore Amiga. The relation is more than just visual; Amibian.js uses the same system architecture, because we believe it's one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all, I'm not selling anything. It's not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done, Amibian.js will be free for everyone (LGPL). And I'm also writing it because it's something that I need and that I haven't seen anywhere else. I think you have to write software for yourself, otherwise the quality won't be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly is that I use a compiler system called Smart Mobile Studio. This saves months and years of development time, and I can use all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).

amibian_shell

Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ Builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the object-pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data-driven applications. It also thrives in embedded systems and low-level system services. In short: it's a lot easier to maintain 50,000 lines of object pascal code than 500,000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely in Smart Mobile Studio. This gives me, as the core developer of both systems, a huge advantage (who knows it better than the designer, right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn't have classical OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the object-pascal community – components and libraries written in Delphi or Freepascal, which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it's been a popular toolkit for ages (Pascal even predates C).

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be setup and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is: yes, you install it. But it's the same as installing Chrome OS; it's not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform-independent, chipset-agnostic system. Something that doesn't care if you are using ARM, x86, PPC or Mips as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it's designed to host more elaborate applications. Elsewhere you would normally just embed an external website in an IFrame, but Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window's message-port. This is a special object that is sent to the application as a handshake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
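
As an illustration, a call over the message-port might look something like the sketch below. The field names are invented for this example; the authoritative layout is whatever the Ragnarok specification defines:

    // Hypothetical example - field names are assumptions, not the spec
    function setWindowTitle(port: MessagePort, title: string): void {
      port.postMessage(JSON.stringify({
        type: "set-window-title", // soft-kernel function to invoke
        id: 42,                   // lets the call-stack match the response
        payload: { title }
      }));
    }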

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS-based applications, or "websites" as many would call them. But like I talked about above, they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where one half is installed as a node.js service and the other half is served as a normal HTML5 app. This is the coolest program model: developers essentially write both a server and a client, and then deploy it as a single package.
  • LDEF-compiled bytecode applications, written in a 68k-inspired assembly language that is JIT compiled by the browser (commonly called "asm.js") and runs extremely fast. The LDEF virtual machine is a sub-project of Amibian.js.

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.

patron_asm2

Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part of the project is to create the world's first cloud-based development platform. A full Visual Studio clone if you like, one that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it's a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic; it was used to create some great games back in the 80s and 90s. It's well suited for games and multimedia programming, and above all, very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: this IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool "app" in BlitzBasic and run it right in the browser – or compile it server-side and deploy it to your Android phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic, with "no native code allowed". This is why the entire back-end and service layer targets node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason is: Chrome OS might be free, but it's only as open as Google wants it to be. Chrome OS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It's quite frankly boring and too boxed-in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud, on cheap SBCs. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn't automatically mean bad; it actually means that it has adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it's still in use around the world – 33 years after the machine hit the market. That is quite an achievement.

image2

The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS. Quite simply because the way Amiga OS is organized, and the way you approach computing on the Amiga, is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It's a system that you could learn to use fully with just a couple of days of exploring, and no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as much of the original ROM kernel functions as possible. Naturally I can't port all of it, because it's not all relevant for Amibian.js. Things like device-drivers serve little purpose here, because Amibian.js talks to node.js, and node talks to the actual system, which in turn handles hardware devices. But the way you create windows, visual controls, bind events and build a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have set up a dedicated machine with Amibian.js, then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons or services as they are called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence – the one that deals with your account, your preferences and adaptations – runs when you log in to the system.

When you log in to your Amibian.js account, no matter if it's locally on a single PC, on a distributed cluster, or via the browser into your cloud account, several things happen:

  1. The client (web-page if you like) connects to the server using WebSocket (see the sketch after this list)
  2. Login is validated by the server
  3. The client starts loading preference files via the mapped filesystem, and then applies these to the desktop.
  4. A startup-sequence script file is loaded from your account, and then executed. The shell-script runtime engine is built into the client, as is REXX execution.
  5. The startup-script will set up configurations, create symbolic links (assigns), and mount external devices (Dropbox, Google Drive, FTP locations and so on)
  6. When finished, the programs in the ~/WbStartup folder are started. These can be both visual and non-visual.
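
A minimal sketch of steps 1 and 2 from the client's side could look like the TypeScript below. The endpoint, message names and fields are assumptions for illustration only:

    // Illustrative login sketch - URL and message layout are assumptions
    const socket = new WebSocket("wss://127.0.0.1:8090");

    socket.onopen = () => {
      socket.send(JSON.stringify({
        type: "login",
        username: "jon",
        password: "secret"
      }));
    };

    socket.onmessage = (ev: MessageEvent) => {
      const msg = JSON.parse(ev.data as string);
      if (msg.type === "login-ok") {
        // From here the client mounts the filesystem, loads preferences
        // and runs the startup-sequence, as described above
        console.log("session token:", msg.token);
      }
    };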

As you can see, Amibian.js is not a mockup or "fake" desktop. It implements all the advanced features you expect from a "real" desktop. The filesystem mapping is especially advanced, where file-data is loaded via special drivers; drivers that act as a bridge between a storage service (a harddisk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.
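
To illustrate the idea, such a driver contract could be sketched like this in TypeScript. The interface name and methods are my own invention, not the actual Amibian.js driver standard:

    // Hypothetical driver contract - not the actual driver standard
    interface StorageDriver {
      mount(options: Record<string, string>): Promise<void>;
      readFile(path: string): Promise<Uint8Array>;
      writeFile(path: string, data: Uint8Array): Promise<void>;
      list(path: string): Promise<string[]>; // directory listing
      unmount(): Promise<void>;
    }

    // A Dropbox driver, an FTP driver and a local-disk driver would all
    // implement the same contract, so the desktop can treat them uniformly.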

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (Scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy images with a game you love, they will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are being ported to JavaScript; you have NES, SNES, N64, PSX I & II, Sega Megadrive and even a Neo Geo port. So playing your favorite console games right in the browser is pretty straightforward!

But the really interesting part is probably QEmu. This allows you to run x86 instances directly in the browser too. You can boot up Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point), but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side, and only the graphics and sound are streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don't have to be on the same machine – you can place them on separate machines and thus divide the workload between them, making the whole system faster.

47470965_10155861938320906_4959664457727868928_n

Above: The official Amibian.js cluster, 4 x ODroid XU4s SBC’s in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called "headless" computers, meaning that they don't have an HDMI port; they are designed to be logged into, with software set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI out port, which serves as "the master".

The architecture is quite simple: we allocate one whole SBC to a single service, and allow the service to copy itself so it can use all the CPU cores available (each SBC has 8 CPU cores). With this architecture, the machine that deals with the desktop clients doesn't have to do all the grunt work. It will accept tasks from the user and hosted applications, and then delegate the tasks between the 4 other machines.

Note: The number of SBCs is not fixed. Depending on your use you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first "official" Amibian.js setup is a 40-core monster shipping at around $250.

But like I mentioned, you don't have to buy this to use Amibian.js. You can install it on a spare x86 PC you have, or daisy-chain a couple of older PCs on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBCs in the initial design all have a GPU (graphical processing unit) as well as audio capabilities. What they lack are GPIO pins and 3 additional USB ports. So each of the nodes in our cluster can handle graphics at blistering speed – but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40-core cluster I use has more computing power than northern Europe had in the early 80s; that's something to think about. And the price tag is under $300 (!). I don't know about you, but I always wanted a proper mainframe, a distributed computing platform that you can log into and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant for emulating an Amiga directly in the browser, a more interesting design is to decouple the emulation from the output. In other words: run the emulation at full speed server-side, and just stream the display and sound back to the Amibian.js client. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi-threading) and fully utilizes the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but that is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it's wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, people will start using it en masse and producing applications for it. It's always a bit of work to reach that point and create critical mass.

tomes

Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and a normal internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster, and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.

I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super-expensive hardware to order, no absurd hosting fees, and finally a system that we can all shape and use in a plethora of systems: from a fully-fledged desktop to a super-advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!

I can't wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

Mirroring groups on the MeWe network

November 18, 2018 1 comment

Following my Administrator woes on Facebook post, I have had a look at alternative places to run a forum. I realize that Facebook has become pretty ingrained in society around the world, so I know not everyone will be interested in a new venue. But honestly, MeWe is very simple to use and has a UI experience very close to the Facebook app.

amibian_shell

This picture was flagged as "hateful" on Facebook, which has rendered my account frozen for the next 30 days. While I agree with the strict rules that FB advocates, they really must deploy more human beings if they intend to succeed in this endeavour. And that means really investigating what is flagged, reading threads in all languages, and so on; because the risk of flagging the wrong guy is just too high. Admins get flagged all the time for kicking out bullies, and the use of reporting tools as a revenge strategy *must* carry a penalty.

MeWe is thankfully not like G+, which (in my personal opinion) was counter-intuitive and downright intrusive. We all remember the G+ auto-upload feature, where some 3 million users had their family photos, vacation photos and .. ehrm, "explicitly personal" photos uploaded without consent.

Well, the MeWe app is very simple, and registration is as easy as it should be. You make a user name, a password, and type in your email; then you verify your email and that’s it!

Besides, my main use for Facebook or MeWe is to run the groups – I spend very little of my time socializing anyway. With the amount of groups and media I push on a daily basis, it's quite frankly their loss.

mewe

The MeWe group functionality is very good, and almost identical to Facebook

The alternative to MeWe is to set up a proper web forum instead. I have bought 6 domains that are now collecting dust, so yes, I will look into that – but the whole purpose of a social platform is that you don't have to do maintenance beyond daily management – so MeWe saves us some time.

So head over to MeWe and register! Here are the two main groups I manage these days. The main groups are on Facebook, but I have now registered the same groups on MeWe.

MeWe doesn't cost anything and takes less than 5 minutes to join. Just like G+ and Facebook, MeWe can be installed as an app on your phone (both iOS and Android). So as far as alternatives go, it's a good one. One more app won't do much harm, I imagine.

Note: I will naturally keep my Facebook account for the sake of the groups, but having experienced this 4 times in 9 years, my tolerance of Mr. Suckerberg is quickly reaching its limits. If I have blurted something out, I have no problem standing by it and taking the penalty; but posting a picture of software development? In a group dedicated to software development? That takes some impressive mental acrobatics to accept.

Admin woes on Delphi Developer

November 17, 2018 8 comments

For well over 10 years I have been running different interest groups on Facebook. While Delphi Developer is without a doubt the one that receives most attention from myself and my fellow moderators, I also run the Quartex Components group and, lately, Amiga Disrupt. The latter is dedicated to my favorite hobby, namely retro computing.

I have to say, it's getting harder to operate these groups under the current Facebook regime. I applaud them for implementing a moral codex that is both fair and good, but that also means that their code must be able to distinguish between random acts of hate and bullying on the one hand, and moderator operations on the other.

A couple of days ago I posted an update picture from Amibian.js. This is a picture of my VMware development platform, with Pascal code, node.js and the HTML5 desktop running. You would have to be completely ignorant of technology not to recognize the picture as having to do with software development.

amibian_shell

This picture was flagged as hateful, and was enough to get an admin’s account frozen for 30 days

Sadly Facebook contains all sorts of people, and for some reason even grown men will get into strange, ideological debates about what constitutes retro-computing. In this case the user was a die-hard original-Amiga fan who, on seeing my post about Amibian.js, went on a spectacular rant, listing in alphabetical and chronological order the depths of depravity that people have stooped to in implementing 68k as JavaScript.

Well, I get 2-3 of these comments a week, and the rules for the group are crystal clear: if you post comments like that, or comments that are racist, hateful or otherwise provocative by the general group standard – you are given a single warning and then you are out.

So I gave him a warning that such comments are not welcome; he immediately came back with an even worse response – and that was the end of that.

But before I managed to kick the user, he reported a picture of Amibian as hateful. Again, we are talking about a screen-dump from VMware with Pascal code. No hate, no poor choice of images – nothing that would violate ordinary Facebook standards.

The result? Facebook has now frozen my account for 30 days (!)

Well, I'm not even going to bother being upset, because this is not the first time. When people willfully seek out conflict, only to use FB's reporting tools as weapons of revenge – well, there is not much I can do.

Anyways, Gunnar, Glenn, Peter and Dennis have got you covered – and I'll see you in a month. I think it's time I contact FB in Oslo and establish separate management profiles.

Smart Mobile Studio presentation in Oslo

September 28, 2018 Leave a comment

Yesterday evening I traveled to Oslo and held a presentation on Smart Mobile Studio. The response was very positive and I hope that everyone who attended left with some new ideas regarding JavaScript, the direction the world of software is heading – and how Smart Mobile Studio can be of service to Delphi.

Smart Pascal is especially exciting in concert with RAD Server, where it opens the doors to node-based, platform-independent services and sub-clustering. With relatively little effort, RAD Server can absorb the wealth that node has to offer through Smart – but on your terms, and under Delphi's control. The best of both worlds.

You get the stability and structure that makes Delphi so productive, and then infuse that with the flamboyance, flair and async brilliance that JavaScript represents.

More important than technology is the community! It’s been a few years since I took part in the Oslo Delphi Club’s meetups, so it was great to chat with Halvard Vassbotten, Trond Grøntoft, Alf Christoffersen, Torgeir Amundsen and Robin Bakker face to face again. I also had the pleasure of meeting some new Delphi developers.

prespic

Presentation at ABG Sundal Collier’s offices in Oslo

Thankfully the number of attendees was a moderate 14, considering this was my first presentation ever. Last time I visited was when our late Paweł Głowacki presented FMX, and the turnout was in the ballpark of a hundred. So it was an easy-going, laid-back atmosphere throughout the evening.

Conflict of interest?

Some might wonder why a person working for Embarcadero would present Smart Mobile Studio, which some still regard as competition. Smart is not in competition with Delphi and never will be. It is written by Delphi developers for Delphi developers, as a means to bridge two worlds. It's a project of loyalty and passion. We continue because we love what it enables us to do.

The talks on Smart that I am holding now, including the November talk in London, were booked before I started at Embarcadero (so it's not a case of me promoting Smart in lieu of Embarcadero). I also made it perfectly clear when I accepted the job that my work on Smart will continue in my spare time. And Embarcadero is fine with that. So I am free to spend my after-work hours and weekend time as I see fit.

smart_desktop

The Smart Desktop, codename Amibian.js, is a solid foundation for building large-scale web front-ends. Importing Sencha’s JS API’s can be done via our TypeScript wizard

So, after my presentation in London in November, Smart Mobile Studio presentations (at least those hosted by me) can only take place during weekends. Which is fair, and the way it should be.

Recording the English version

Since the presentation last evening was in Norwegian, there was little point in recording it. Norway has a healthy share of Delphi developers, but a programming language available internationally must be presented in English.

A couple of months back, before I started working for Embarcadero, I promised to do a video presentation that would be made available on Delphi Developer and YouTube. I very much want to keep that promise. So I will re-do the presentation in English as soon as possible. I would have done it today after work, but buying tech from the US has changed quite dramatically in just a couple of years.

In short: I haven't received the remaining equipment I ordered for professional video recording and audio podcasting (which is a part of my Patreon offering as well). As such there will be no live video-feed /slash/ webinar, and questions will be limited to either the comment-section on Delphi Developer, or perhaps more appropriately, the Smart Mobile Studio Forums.

I'm hoping to get the HD camera, mic table-arm and various bits-and-bobs I ordered from the US sometime next week. I have no idea why FedEx has become so difficult lately, but the package is apparently at LaGuardia, and I have to send receipts documenting that these items are paid for before they ship them abroad (so the package manifest listing me as the customer, my address, phone number and the receipt from the seller is somehow not enough). This is a first for me.

Interestingly they also stopped a package from Embarcadero with giveaways for my upcoming Delphi presentation in Sweden – at which point I had to send them a copy of my work contract to prove that I indeed work for an American company.

But a promise is a promise, so come rain or shine it will be done. Worst case scenario we can put Samsung’s claims to the test and hook up a mic + photo lens and see if their commercials have any merit.

HexLicense, Patreon and all that

September 6, 2018 Comments off

Apparently, using a modern service like Patreon to maintain components has become a point of annoyance and confusion. I realize that I formulated the initial HexLicense post somewhat vaguely and confusingly; in retrospect I will admit that, and also take critique for not spending a little more time on preparations.

Having said that, I also corrected the mistake quickly and clarified the situation. I feel some of the comments have been excessively critical of something that, ultimately, is a service to the community. But I'll roll with the punches; let's just put this issue to bed.

From the top please

I have several products and frameworks that naturally take time to maintain and evolve. And having to maintain websites, pay for tax and invoicing services, pay for hosting (and so on) consumes a lot of hours. Hours that I can no longer afford to spend (my work at Embarcadero must come first, and I have a family to support). So Patreon is a great way to optimize a very busy schedule.

Today, developers solve a lot of this business strain by using Patreon. They make their products open source, but give those that support and help fund the development special perks, such as early access, special builds and a more direct line of control over where the different projects and sub-projects are heading.

The public repository that everyone has access to is maintained by pushing the code at intervals, meaning that the public "free stuff" (LGPL v3 license) will be some months behind the early access that patrons enjoy. This is common, and the same approach both large and small teams take in 2018. Quite radical compared to what we "old-timers" are used to, but that's how things work now. I just go with the flow and try to do the most amount of good on the journey.

Benefits of Patreon

The benefits are many, but first and foremost it has to do with time. Developers don't have to maintain 3-4 websites, pay for invoicing services on said products, pay hosting fees and rent support forums – instead the focus is on getting things done. So instead of an hour here and there, you can (based on the level of support) allocate X continuous hours within a week or weekend.

4a128ea6852444fbfc89022be4132e9b

Patreon solves two things: time and cost

Everyone wins. Those that support and help fund the projects enjoy early access and special builds. The community at large wins because the public repository is likewise maintained, albeit somewhat behind the cutting-edge code patrons enjoy. And the developer wins because he or she doesn't have to run around like a mad chicken maintaining X number of websites, wasting more time doing maintenance than building cool new features.

And above all, pricing goes down. By spreading the cost over a larger base of interest, people get access to code that used to cost $200 for $35. The more people that help out, the more the cost can be reduced per tier.

To make it crystal clear what the status of my frameworks and component packages is, here is a carbon copy from HexLicense.com:

For immediate release

Effective immediately HexLicense is open-source, released under the GNU Lesser General Public License v3. You can read the details of that license by clicking here.

Patreon model

In order to consolidate the various projects I maintain, I have established a Patreon account. This means that people can help fund further development on HexLicense, LDEF, Amibian and various Delphi libraries as a whole. This greatly simplifies things for everyone.

I will be able to allocate time based on a broader picture, and I no longer need to pay for invoicing services, web hosting and more. This allows me to continue to evolve the components and code, but without so many separate product identities to maintain.

Patreon supporters will receive updates before anyone else and have direct access to the latest code at all times. The public bitbucket repository will be updated at intervals, but will as a consequence be behind the Patreon updates.

Further security

One of the core goals on Patreon is the evolution of a bytecode compiler. This should be of special interest to HexLicense users. Being able to compile modules that hackers will be unable to debug gives you a huge advantage. The engine is designed so that the instruction-set can be randomized for a particular build, making it unique to your application.

patron_asm1

The LDEF assembler prototype running under Smart Mobile Studio

Well, I want to thank everyone involved. It has been a great journey to produce so many components, libraries and solutions over the years – but now it’s time for me to cut down on the number of projects and focus on core technology.

HexLicense with the updated license files will be uploaded to BitBucket shortly.

Sincerely,

Jon Lennart Aasenden

Nano PI Fire 3, part two

July 18, 2018 Leave a comment

If you missed the first installment of this test, please click here to catch up. In this installment we are just going to dive straight into general use and get a feel for what can and cannot be done.

Solving the power problem

As mentioned in the previous article, a normal mobile charger (5 volts, 2 amps) is not enough to support the Nano-PI. Since I have misplaced my original PI power-supply with 5 volts / 3 amps, I decided to cheat. So I plugged the power USB into my PC, which will deliver as much juice as the device needs. I don't have time to wait for a new PSU to arrive, so this will have to do.

But for the record (and underlined): a proper PSU with at least 2.5 amps is essential to using this board. I suggest you order the official Raspberry PI 3b power-supply. And if you should find one with 3 amps, that would be even better.

Web performance

The question on everyone's mind (or at least mine) is: how does the Nano-PI Fire 3 perform when rendering cutting-edge, hardcore HTML5? Is this little device a potential candidate for running "The Smart Desktop" (a.k.a. Amibian.js for those of you coming from the retro-computing scene)?

As I suspected earlier, X (the Linux windowing system) doesn't have drivers that deliver hardware acceleration at all.

shot_desktop-1024x819-2-1024x819

Lubuntu is a sexy desktop no doubt there, but it’s overkill for this device

This is quite easy to test: select a rectangle on the Lubuntu desktop and move the mouse-cursor around while holding down the left mouse button. If it lags terribly, that is a clear indicator that no acceleration exists.

And I was right on the money, because there is no acceleration whatsoever in this Linux distribution. It struggles hopelessly to keep up with the mouse-pointer as you move it around with an active selection; something that would be silky smooth had the GPU been tasked with the job.

But hardware acceleration is not just about the desktop. It's not some flag you enable that magically affects everything, but rather several APIs at either the kernel level or the immediate driver level (modules the kernel loads), each affecting different aspects of a system.

So while the desktop "2d blitting" is clearly CPU driven, other aspects of the system can still be accelerated (although that would be weird and rare; but considering how Asus messed up the Tinkerboard, I guess anything goes these days).

Asking Chrome for the hard facts

I fired up Google Chrome (which is the default browser, thank god) and entered the magic URL:

chrome://gpu

This is a built-in page that provides a detailed report of what Chrome has learned about the current system, right down to the specific GPU features used by OpenGL.

As expected, there was NO acceleration whatsoever. So I was quite surprised that it managed to run Amibian.js at all. Even without hardware acceleration it outperformed the Raspberry PI 3b+ by a factor of 4 (at the very least), and my particle demo ran at a whopping 8 fps (frames per second). The original Raspberry PI could barely manage 2 fps. So the Nano-PI Fire is leagues ahead of the PI in terms of raw CPU power, which is brilliant for headless servers or computational tasks.

FriendlyCore vs Lubuntu? QT for the win

Now here is a funny thing. So far I have used the standard Lubuntu Linux image, and performance has been interesting to say the least. No hardware acceleration, impressive CPU results, but still – what good is an SBC Linux distro without fast graphics? Sure, if you just want a headless file server or host services, then you don't need a beefy GPU. But here is the twist:

Turns out the makers of the board have a second, QT-oriented distro called Friendly-core. And this image has OpenGL ES support and all the acceleration missing from Lubuntu.

I was pretty annoyed with how Asus gave users the run-around with Tinkerboard downloads, but they have thankfully cleaned up their act and listened to their customers. Friendly-elec might want to learn from Asus' mistakes in this area.

Qtwebenginebrowser

QT has a rich history, but it’s being marginalized by node.js and Delphi these days

Alas, the Friendly-core Xenial 4.4 ARM64 image turned out to be a pure embedded development image. This is why the board has a debug port (which is probably awesome if you are into QT development). So this is for QT developers who want to use the board as a single-application system: they write the code on Windows or Linux, compile, and it’s all transported to the board with live debugging back to the devtools they use. In other words: not very useful for non C/C++ QT developers.

Android Lollipop

2000px-Android_robot.svg

I have only used Android on a pad and the odd Samsung Galaxy phone, so this should be interesting. I downloaded the Lollipop disk image, burned it to the sd-card and booted up.

After 20 minutes with a blank screen I gave up.

I realize that some Android distros download packages ad-hoc and install directly from a repository, so it can take some time to get started; but 15-20 minutes with a black screen? The Android logo didn’t even show up — and that should be visible almost immediately, regardless of whether it’s a network install or not.

This is really a great shame, because I wanted to test some Delphi Firemonkey applications on it, to see how well it scales the more demanding GPU tasks. And yes, I did try a different SD card to be sure it wasn’t a disk error. Same result.

Back to Lubuntu

Having spent a considerable time trying to find a “wow” factor for this board, I have to surrender to the fact that it’s just not there. This is not a “PI” any more than the Tinkerboard is a PI. And appending “pi” to a product name will never change that.

I can imagine the Nano-PI Fire 3 being an awesome single-application board for QT C/C++ developers though, with a dedicated debug port making it a snap to transport, execute and do live debugging directly on the hardware. But for general DIY hacking, using it for native Android development with Delphi, or node.js development with Smart Mobile Studio – or just kicking back with emulators like Mame, UAE or whatever tickles your fancy – it’s just too rough around the edges. Which is really a shame!

So at the end of the day I re-installed Lubuntu and figure I just have to wait until Friendly-elec get their act together and issue proper drivers for the Mali GPU. So it’s $35 straight out the window — but I can live with that. It was a risk but at that price it’s not going to break the bank.

The positive thing

The Nano-PI Fire 3 is yet another SBC in a long list of boards that fall short of their potential. Like many others, they try to use the word “PI” to channel some of the Raspberry PI enthusiasm their way – but the quality of the actual system is not even close.

In fact, using PI in their product name is setting themselves up for a fall – because customers will quickly discover that this product is not a PI, which can cause some subconscious aversion and resentment.

37013365_10155541149775906_3122577366065348608_o

The Nano rendered Amibian.js running some very demanding demos 4 times as fast as the PI 3b; one can only speculate what the board could do with proper drivers for the GPU.

The only positive feature the Fire-3 clearly has to offer is abundantly more cpu power. It is without a doubt twice as fast (if not 3 times as fast) as the Raspberry PI 3b. The fact that it can render highly demanding and complex HTML5 demos 4 times faster than the Raspberry PI 3b without hardware acceleration is impressive. And this is a $35 board after all, the same price as the PI.

But without proper drivers for the mali, it’s a useless toy. Powerful and with great potential, but utterly useless for multimedia and everything that relies on fast 2D and 3D graphics. For UAE (Amiga emulation) you can pretty much forget it. Even if you can compile the latest UAE4Arm with SDL as its primary display framework – it wouldn’t work because SDL depends on the graphics drivers. So it’s back to square one.

But the CPU packs a punch, that much is beyond question.

Final verdict

Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4

There are a lot of stable and excellent options out there, take your time

I was planning to test UAE next but as I have outlined above: without drivers that properly expose and delegate the power of the mali, it would be a complete disaster. I’m not even sure it would build.

As such I will just leave this board as is. If it matures at some point that would be great, but my advice to people looking for a great SBC experience — get the new Raspberry PI 3b+ and enjoy learning and exploring there.

And if you are into Amibian.js or making high quality HTML5 kiosk / node.js based systems, then fork out the extra $10 and buy an ODroid XU4. If you pay $55 you can pick up the Asus Tinkerboard which is blistering fast and great value for money, despite its turbulent introduction.

Note: You cannot go wrong with the ODroid XU4. It’s affordable, stable and fast. So for beginners it’s either the Raspberry PI 3b+ or the ODroid. These are the most mature in terms of software, drivers and stability.

Power for pennies, getting a server rack and preparing my ultimate coding environment

July 18, 2018 Leave a comment

One of the benefits of doing repairs on your house is that during the cleanup process you come across stuff you had completely forgotten about. Like two very powerful Apple blade servers (x86) I received as a present three years ago. I never got around to using them because there was literally no room in my house for a rack cabinet.

Sure, a medium model rack cabinet isn’t that big (the size of a cabin refrigerator), but you also have to factor in that servers are a lot noisier than desktop PCs; the older they are, the more noise they make. So unless you have a good spot to place the cabinet, where the noise won’t make it unbearable to be around, I suggest you just rent a virtual instance at Amazon or something. It really depends on how much service coding you do, and whether you need to do dedicated server and protocol stress testing (the list goes on).

Power for pennies

serverrack

Seller’s photo. It needs a good clean, but this kit would have set you back $5000 a decade ago; picking it up for $400 is almost ridiculous.

The price of such cabinets (when buying new ones) can be anything from $800 to $5000 depending on capacity, features and materials. My needs for a personal server farm are more than covered by a medium cabinet. If it wasn’t for my VMWare needs I would say it was overkill. But some of my work, especially node.js and Delphi system services that should handle terabytes of raw data reliably 24/7, demands a hard-core testing environment.

Having stumbled upon my blade servers, I decided to check the local second-hand online forum; and I was lucky enough to find (drumroll) a cabinet holding a total of 10 blades for $400. So I’ll be picking up this beauty next weekend. It will be so good to finally get my blades organized. Not to mention all my SBC / Node.js cluster experiments centralized in one physical location. Far away from my home office space (!)

Interestingly, it comes fitted with 3 older servers. There are two Dell web and file servers, and then a third, unmarked mystery box (i3 CPU + SATA caddies, so that sounds promising).

It really is amazing how much cpu fire-power you can pick up for practically nothing these days. $50 buys you a SBC (single board computer) that will rival a Pentium. $400 buys you a 10 blade cabinet and 3 servers that once powered a national newspaper (!).

VMWare delights

All the blades I have mentioned so far are older models. They are still powerful machines, way more than a $400 living-room NAS would get you. So my node.js clustering will run like a dream, and I will be able to host all my Delphi development environments via VMware. Which brings us neatly to the blade I am really looking forward to getting into the rack.

I bought an empty server blade case back in 2015. It takes a PSU and motherboard; fans and everything else are already there (even the six caddies for disks). Into this seemingly worthless metal box I put a second generation Intel i7 monster (Asus motherboard) with 32 gigabytes of ram – and fitted it with a sexy NVidia GEFORCE GTX 1080 TI.

37013365_10155541149775906_3122577366065348608_o

All my Delphi work, Smart work and various legacy projects I maintain, all in one neat rack

This little monster (actually it takes up 2 blade-spots) allows me to run VMWare server, which gives me at least 10 instances of Windows (or Linux, or OSX) at the same time. It will also be able to host and manage roughly 1000 active Smart Desktop users (the bottleneck will be the disk and network more than actual computation).

Being a coder in 2018 is just fantastic!

Things we could only dream about a decade ago can now be picked up for close to nothing (compared to the original cost). Just awesome!


What is new in Smart Mobile Studio 3.0

July 16, 2018 1 comment

Trying to sum up the literally thousands of changes we have done in Smart Mobile Studio the past 12 months is quite a challenge. Instead of just blindly rambling on about every little detail – I’ll try to focus on the most valuable changes; changes that you can immediately pick up and experience for yourself.

Scriptable css themes

theme_structure

A visual control now has its border and background styled from our pre-defined styles. The styles serve the same function in all themes even though they look different.

This might not feel like news since we introduced this around xmas, but like all features it has matured through the beta phases. The benefits of the new system might not be immediately obvious.

So what is so fantastic about the new theme files compared to the old css styling?

We have naturally gone over every visual control to make them look better, but more importantly – we have defined a standard for how visual controls are styled. This is important because without a theme system in place, making applications “theme aware” would be impossible.

  • Each theme file is constructed according to a standard
  • A visual control is no longer styled using a single css-rule (like we did before), but rather a combination of several styles:
    • There are 15 background styles, each with a designated role
    • There are 14 borders, each designed to work with specific backgrounds
    • We have 4 font sizes to simplify what small, normal, medium and large mean for a particular theme.
  • A theme file contains both CSS and Smart pascal code
  • The code is sandboxed and has no access to the filesystem or RTL
  • The code is executed at compile time, not runtime (!). So the code is only used to generate things like gradients based on constants; “scaffolding” code, if you will, that makes it easier to maintain and create new themes (see the sketch below).
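
To make that last point concrete, here is a hedged sketch of what such scaffolding code boils down to. The constant names and the surrounding theme-file syntax are illustrative only; the idea is pascal code that expands constants into CSS text when the theme is compiled:

const
  cBackTop    = '#5487ab'; // illustrative theme constants
  cBackBottom = '#2d5986';

// Expands two colors into a CSS gradient string at compile time
function VerticalGradient(const AFrom, ATo: string): string;
begin
  Result := 'linear-gradient(to bottom, ' + AFrom + ', ' + ATo + ')';
end;

// A background style can then reference the generated value, e.g.
//   background-image: VerticalGradient(cBackTop, cBackBottom);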

Optimized and re-written visual controls

Almost all our visual controls have been re-written or heavily adjusted to meet the demands of our users. The initial visual controls were originally designed as examples, following in the footsteps of mono where users are expected to work more closely with the code.

To remedy this we have gone through each control and added the features you would expect to be present. In most cases the controls are clean re-writes, taking better advantage of HTML5 features such as flex-boxing and relative positions (you can now change layout mode via the PositionMode property; DisplayMode is likewise a read-write property).

flexing

Flex boxing relieves controls of otherwise expensive layout chores and evenly distributes elements

Flex-boxing is a layout technique where the browser will automatically stretch or equally distribute screen real estate for child elements. Visual controls like TW3Toolbar and TW3ListMenu makes full use of this – and as a result they are more lightweight, requires no resize code and behave like native controls.

Momentum scrolling as standard

Apple have changed the rules for scrolling 3 times in the past eight years, and it’s driving HTML/JS developers nuts every time. We decided years ago that we’d had enough, and implemented momentum scrolling ourselves in Smart Pascal. So no matter if Apple or anyone else decides to make life difficult for developers – it won’t bother us.

momentum

Momentum scrolling with indicator (or scrollbars) are now standard for all container controls and lists.

Our new TW3Scrollbox and (non-visual) TW3ScrollController mean that all our container and list controls support GPU-powered momentum scrolling by default. You can also disable this and use whatever default method the underlying web-view or browser has to offer.

Bi-directional Tab control

A good tab control is essential when making mobile and web applications, but making one that behaves like native controls do is quite a challenge. We see a lot of frameworks that have problems doing the bi-directional scrolling that mobile tabs do, where the headers scroll in place as you click or touch them – and the content of the tab scrolls in from either side (at the same time).

tabcontrol

Thankfully this was not that hard to implement for us, since we have proper inheritance to fall back on. JS developers tend to be limited to prototype cloning, which makes it difficult to build up more and more complex behavior. Smart enjoys the same inheritance system that Delphi and C++ use, and this makes life a lot easier.

Google Maps control

Not exactly hard to make but a fun addition to our RTL. Very useful in applications where you want to pinpoint office locations.

google-maps-android-100664872-orig

Updated ACE coding editor

ACE is regarded by many as the de-facto standard text and code editor for JavaScript. It is a highly capable editor on par with SynEdit in the Delphi and C++ world. This is actually the only visual control that we did not implement ourselves, although our wrapper code is substantial.

ace

Ace comes with a wealth of styles (color themes) and support for different programming languages. It can also take on the behavior of other editors like emacs (an editor as old as Unix).

We have updated Ace to the latest revision and tuned the wrapper code for speed. There was a small problem with padding that caused Ace to misbehave earlier, this has now been fixed.

The Smart Desktop, windowing framework

People have asked us for more substantial demos of what Smart Mobile Studio can do. Well this certainly qualifies. It is probably the biggest product demo ever made and represents a complete visual web desktop with an accompanying server (the Ragnarok Websocket protocol).

37013365_10155541149775906_3122577366065348608_o

The Smart Desktop showcases some of the power Smart Mobile Studio can muster

It involves quite a bit of technology, including a filesystem that uses the underlying protocol to browse and access files on the server as if they were local. It can also execute shell applications remotely and pipe the results back.

A shell window and command-line system is also included, where commands like “dir” yield the actual directory of whatever path you explore on the server.

Since the browser has no concept of “window” (except a browser window) this is fully implemented as Smart classes. Moving windows, maximizing them (and other common operations) are all included.

The Smart desktop is a good foundation for making large-scale, enterprise-level web applications. Applications the size of Photoshop could be made with our desktop framework, and it makes an excellent starting-point for developers working on routers, set-top boxes and kiosk systems.

Node.JS and server-side technology

While we have only begun to expand our node.js namespace, it is by far one of the most interesting aspects of Smart Mobile Studio 3.0. Where we used to have only rudimentary (or very low-level) support for things like http – the SmartNJ namespace represents high-level classes that can be compared to Indy under Delphi.

As of writing, the following servers can be created (a hedged sketch follows the list):

  • HTTP and HTTPS
  • WebSocket and WebSocket-Secure
  • UDP Server
  • Raw TCP server
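
To give you a feel for the high-level classes, here is a sketch of a node.js HTTP server. The unit, class and member names below follow the SmartNJ naming style but are written from memory, so treat them as assumptions rather than verified RTL signatures:

uses
  SmartNJ.Server.Http; // unit name assumed

procedure StartDemoServer;
var
  Server: TNJHTTPServer; // class name assumed
begin
  Server := TNJHTTPServer.Create;
  Server.Port := 8080;
  Server.OnRequest := procedure (Request: TNJHttpRequest; Response: TNJHttpResponse)
    begin
      // method name assumed; ends up in node's response.end()
      Response.Write('Hello from Smart Pascal under node.js');
    end;
  Server.Start;
end;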

The cool thing is that the entire System namespace, with all our foundation code, is fully compatible and can be used under node. This means streams, buffers, JSON, our codec classes and much, much more.

I will cover the node.js namespace in more detail soon enough.

Unified filesystem

The browser allows some access to files, within a sandboxed and safe environment. The problem is that this system is completely different from what you find under phonegap, which in turn is wildly different from what node.js operates with.

In order for us to make it easy to store information in a unified way, which also includes online services such as Azure, Amazon and Dropbox — we decided to make a standard.

filesys

The Smart Desktop shows the filesystem and device classes in action. Here accessing the user-account files on the server both visually and through our command-line (shell) application.

So in Smart Mobile Studio we introduce two new concepts:

  • Storage device classes (or “drivers”)
  • Path parsers

The idea is that if you want to save a stream to a file, there should be a standard mechanism for doing so. A mechanism that also works under node, phonegap and whatever else is out there.

For the browser we went as far as implementing our own filesystem, based on a fast B-Tree class that can be serialized to both binary and JSON. For Node.js we map to the existing filesystem methods – and we will continue to expand the RTL with new and exciting storage devices as we move along.

Path parsers deal with how operating systems name and organize folders and files. Microsoft Windows has a very different system from Unix, which again can have one or two subtle differences from Linux. When a Smart application boots it will investigate what platform it’s running on, and create + install an appropriate path parser.

You will also be happy to learn that the unit System.IOUtils, which is a standard object pascal unit, is now a part of our RTL. It contains the class TPath, which gives you standard methods for working with paths and filenames.
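
Assuming our port mirrors Delphi’s System.IOUtils (which is the intention), working with paths looks roughly like this, with WriteLn standing in for whatever log output you prefer:

uses
  System.IOUtils;

procedure PathDemo;
var
  LFull: string;
begin
  // Combine picks the correct delimiter via the installed path parser
  LFull := TPath.Combine('/home/quartex/data', 'settings.json');
  WriteLn(TPath.GetFileName(LFull));              // settings.json
  WriteLn(TPath.GetExtension(LFull));             // .json
  WriteLn(TPath.ChangeExtension(LFull, '.bak'));  // /home/quartex/data/settings.bak
end;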

New text parser

Being able to parse text is important. We ported our TextCraft parser (initially written for Delphi) to Smart, which is a good framework for making both small and complex parsers. And we also threw in a bytecode assembler and virtual-cpu demo just for fun.

Note: The assembler and virtual cpu are meant purely as a demonstration of the low-level routines our RTL has to offer. Most JS-based systems run away from raw data manipulation; well, that is not the case here.

asmparse

Time to get excited!

I hope you have enjoyed this little walk-through. There are hundreds of other items we have added, fixed and expanded (we have also given the form-designer and property inspector some much needed love) – but some of the biggest changes are shown here.

For more information stay tuned and visit www.smartmobilestudio.com

Smart Mobile Studio: Q&A about v3.0 and beyond

July 1, 2018 4 comments

A couple of days back I posted a sneak-peek of our upcoming Smart Mobile Studio 3.0 web desktop framework; as a consequence my Facebook messenger app has practically exploded with questions.

smart_desktop

The desktop client / server framework is an example of what you can do in Smart

As you can imagine, the questions people ask are often very similar; so similar in fact that I will answer the hottest topics here. Hopefully that will make it easier for everyone.

If you have further questions then either ask them on our product forums or the Delphi Developer group on Facebook.


Generics

Yes indeed, we have generics running in the labs. We haven’t set a date for when we will merge the new compiler-core, but it’s not going to happen until (at the earliest) v3.4. So it’s very much a part of Smart’s future, but we have a couple of steps left on our time-line for v3.0 through v3.4.

RTTI access

RTTI is actually in the current version, but sadly there is a bug there that causes the code generator to throw a fit. The fix for this depends on a lot of the sub-strata in the new compiler-core, so it will be available when generics arrive.

Associative arrays

This is ready and waiting in the new core, so it will appear together with generics and RTTI.

Databases

We have supported databases since day 1, but the challenge with JavaScript is that there are no “standards” like we are used to from established systems like Delphi or Lazarus.

Under the browser we support WebSQL and our own TW3Dataset. We also compiled SQLite from native C to JavaScript, so we can provide a fast, lightweight SQL engine for the browser regardless of what the W3C might do (WebSQL has been deprecated but will be around for many years still).
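
For the curious, raw WebSQL as the browser exposes it looks like this; our wrapper classes sit on top of these calls, and the table and values here are just examples:

procedure WebSQLDemo;
begin
  asm
    // standard W3C WebSQL calls: openDatabase, transaction, executeSql
    var db = window.openDatabase("demo", "1.0", "demo db", 2 * 1024 * 1024);
    db.transaction(function (tx) {
      tx.executeSql("CREATE TABLE IF NOT EXISTS customers (id INTEGER, name TEXT)");
      tx.executeSql("INSERT INTO customers (id, name) VALUES (?, ?)", [1, "Jon"]);
      tx.executeSql("SELECT * FROM customers", [], function (tx, res) {
        console.log(res.rows.length + " row(s)");
      });
    });
  end;
end;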

Server-side it’s a whole different ballgame. There you have drivers (or modules) for every possible database you can think of, even Firebird. But each module is implemented as the authors see fit. This is where our database framework comes in and sets a standard; we then inherit our classes from it and implement the engines we want.

This framework and standard is being written now, but it won’t be introduced until v3.1 or v3.2. In the meantime you have SQLite both server-side and client-side, plus WebSQL and TW3Dataset.

Attributes

This question is often asked separately from RTTI, but it’s ultimately an essential part of what RTTI delivers.

So same answer: it will arrive with the new compiler-core / infrastructure.

Server-side scripting

22555292_1630289757034374_6911478701417326545_n

The new theme system in action

While we do see how this could be useful, it requires a substantial body of work to make a reality. Not only would we have to implement the whole “system” namespace from scratch, since JavaScript would not be present, but we would also have to introduce a secondary namespace; one that would be incompatible with the whole RTL at a fundamental level. Instead of going down this route we opted for Node.js, where creating the server itself is the norm.


If we ever add server-side scripting it would be JavaScript support under node.js, by compiling the V8 engine from C++ to asm.js. But right now our focus is not on server-side scripting, but on cloud building-blocks.

Bytecode compilation

I implemented the assembler and runtime for our bytecode system (LDef) this winter / early spring, so we actually have the means to create a pure bytecode compiler and runtime.

But this is not a priority for us at this time. Smart Mobile Studio was made for JavaScript, and while it would be cool to compile Delphi source-code to portable bytecodes, such a project would require not just a couple of namespaces – but a complete rewrite of the RTL. The assembler source-code and parser can be found in the “Next Generation Demos” folder (Smart Mobile Studio 3.0 demos). Feel free to build on the codebase if you fancy creating your own language; get creative and enjoy! Note: Using the assembler in your own projects requires a valid Smart Mobile license.

Native Apps

It’s interesting that people still ask this, since it’s one of our central advantages. We already generate native apps via the Phonegap post-processor.

phonegap

Phonegap turns your JS apps into native apps

Phonegap takes your compiled Smart code (visual projects only), processes it, and spits out native apps for every mobile OS on the market (and more). So you don’t have to compile especially for iOS, Android, Tizen or FireOS — Phonegap generates one for each system you need, ready for the AppStore.

So we have native covered by proxy. And with millions of users Phonegap is not going anywhere.

Release date

We are going over the last beta as I type this, and Smart Mobile Studio 3.0 should be available next week. Which day is not easy to say, but at least before next weekend if all goes according to plan.

Make sure you visit www.smartmobilestudio.com and buy your license!

Patching Smart Mobile Studio’s ACE editor

June 30, 2018 Leave a comment

The Ace text-editor has been a part of the Smart Mobile Studio component set for a while now. It is seen by many as the de-facto code and text editor for JavaScript, and much like SynEdit for Delphi and C++ Builder – Ace has support for a myriad of themes, languages and even key-shortcut mapping.

Align problems

22814515_1630289797034370_9138255627706616601_n

With the introduction of our new theme engine, we completely revamped the entire notion of how a theme is organized. Gone are the hard-coded styles that targeted each individual control, regardless of whether it was used or not. Instead we created a theme engine with a fixed number of borders and backgrounds, which are used as building-blocks by our visual controls.

This makes life much easier for everyone, especially Smart developers who write their own custom-controls (which you kinda have to do if you want something unique).

But Ace didn’t like this one bit. It has been quite a debugging chore to track down what the heck was causing Ace to misplace the cursor like that. It only happens when you apply a language theme to Ace (Ace has its own themes and language parsers). And it’s not a superficial bug either; it renders Ace useless for anything serious.

Fixing the padding

The “bug” turned out to be as simple as padding. In our theme-files we are very careful and avoid imposing on other styles that might be loaded. But there are two sections where we apply values globally (as in “apply this to all elements of type x, y and z”).

One of these values is the padding. Depending on the theme, the padding is either set to 1px or 2px. This is set in a constant (Smart supports scriptable stylesheets) almost at the top of the file.

Before you start changing the theme files, I suggest you do the following:

  • Copy the existing theme files and prefix them with “np” (no padding). Just keep these copies in the same folder as the other themes
  • You should now have the following files in your theme folder:
    • npDefault.css
    • npAndroid.css
    • npiOS.css
    • Default.css
    • Android.css
    • iOS.css
  • Now edit each file (only those prefixed with np), and change the constant “stdpadding” which is defined on the top of each file (line #6), and set it to “0px” rather than the original “2px”.
  • Save all changes to the files
  • Restart Smart Mobile Studio

When the Smart IDE restarts it will have your additional theme files in the project options (under “linker”).

If you use Ace in your application then simply pick one of the new files as an alternative to the older. This fixes the problem with Ace’s cursor ending up behind the last character on a styled line.

Less intrusive fix

An alternative and less intrusive remedy is to define a custom css style for Ace directly in your code. This is now very simple thanks to our css classes, but if you use Ace a lot then the above fix is probably the best for now.

stylecode

Injecting a CSS style is very simple 
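
For reference, the mechanics boil down to injecting a style element into the document. Whether you use our css classes or a raw asm section like the sketch below is a matter of taste; .ace_editor is Ace’s own root class, and the zero padding mirrors the np* theme fix above:

procedure InjectAcePaddingFix;
begin
  asm
    // plain DOM: create a style tag and append it to the document head
    var style = document.createElement("style");
    style.innerHTML = ".ace_editor { padding: 0px; }";
    document.head.appendChild(style);
  end;
end;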

The Amiga ARM project

April 19, 2018 5 comments

This has been quite the turbulent week. Without getting into all the details, a post that I made with thoughts and ideas for an Amiga inspired OS for ARM escaped the safe confines of our group, Amiga Disrupt, and took on a life of its own.
This led to a few critical posts being issued publicly, which all boiled down to a misunderstanding. Thankfully this has been resolved and things are back to normal.

The question on everyone’s lips now seems to be: did Jon mean what he said, or was it just venting frustration? I thought I made my points clear in my previous post, but sadly Commodore USA formulated a title open for interpretation (which is understandable considering the mayhem at the time). So let’s go through the ropes and put this to rest.

Am I making an ARM based Amiga inspired OS?

Hopefully I don’t have to. My initial post, the one posted to the Amiga Disrupt comment section (and mistaken for a project release note), had a couple of very clear criteria attached:

If nothing has been done to improve the Amiga situation [with regards to ARM or x86] by the time I finish Amibian.js (*), I will take matters into my own hand and create my own alternative.

(*) As you probably know, Amibian.js is a cloud implementation of Amiga OS, designed to bring Amiga to the browser. It is powered by a node.js application server; a server that can be hosted either locally (on the same machine as the html5 client) or remotely. It runs fine on popular embedded devices such as Tinkerboard and ODroid, and when run in a full-screen browser with no X or Windows desktop behind it – it is practically indistinguishable from the real thing.

We have customers who use our prototype to deliver cloud based learning for educational institutions. Shipping ready to use hardware units with pre-baked Amibian.js installed is perfect for schools, libraries, museums, routers and various kiosk projects.

smart_desktop

Amibian.js, here running Quake 3 at 60 fps in your browser

Note: This project started years before FriendOS, so we are not a clone of their work.

Obviously this is a large task for one person, but I have written the whole system in Smart Mobile Studio, a product our company started some 7 years ago and that now has a team of six people behind it. In short, it takes object pascal code such as Delphi and Freepascal, and compiles it to JavaScript, suitable for both the browser and NodeJS. It gives you a full IDE with form designer, drag & drop visual components, and a vast and rich RTL (run-time library), which naturally saves me a lot of time. This gives me an edge over other companies working with similar technology. So while it’s a huge task, it’s leveraged considerably by the toolchain I made for it.

So am I making a native OS for ARM or x86? The short answer: I will if the situation hasn’t dramatically improved by the time Amibian.js is finished.

Instead of wasting years trying to implement everything from scratch, Pascal Papara took the Linux kernel and ran with it. So Aeros boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded.

If you are thinking “so what, who the hell do you think you are?” then perhaps you should take a closer look at my work and history.

I am an ex Quartex member, which was one of the most infamous hacking cartels in Europe. I have 30 years of software development behind me, having worked as a professional developer since the age of 17. I have a history of taking on “impossible” projects and finding ways to deliver them. Smart Mobile Studio itself was deemed impossible by most Delphi developers; it was close to heresy, triggering an avalanche of criticism for even entertaining the idea that object pascal could be compiled to JavaScript, let alone thrive on the JSVM (JavaScript Virtual Machine).

assembler

Amibian.js runs javascript, but also bytecodes. Here showing the assembler prototype

You can imagine the uproar when our generated JavaScript code (compiled from object pascal) actually bested native code. I must admit we didn’t expect that at all, but it changed the way Delphi and object pascal developers looked at the world – for the better I might add.

What I am good at is taking ordinary off-the-shelf parts and assembling them in new and exciting ways; often ways the original authors never intended, in order to produce something unique. My faith is not in myself, but in the ability and innate capacity of human beings to find solutions. The biggest obstacle to progress is ultimately pride and fear of losing face. Something my Buddhist training beat out of me ages ago.

So this is not an ego trip, it’s simply a coder that is completely fed-up with the perpetual mismanagement that has held Amiga OS in captivity for two decades.

Amiga OS is a formula, and formulas are bulletproof

People love different aspects of the same thing – and the Amiga is no different. For some the Amiga is the games. Others love it for its excellent sound capabilities, while some love it for the ease of coding (the 68k is the most friendly cpu ever invented in my book). And perhaps all of us love the Amiga for the memories we have. A harmless yet valuable nostalgia of better times.

image3

Amiga OS 3.1 pimped up, running on Amibian [native] Raspberry PI 3b

But for me the love was always the OS itself. The architecture of Amiga OS is so elegant and dare I say, pure, compared to other systems. And I’m comparing against both legacy and contemporary systems here. Microsoft Windows (WinAPI) comes close, but the sheer brilliance of Amiga OS is yet to be rivaled.

We are talking about a design that delivers a multimedia driven, window based desktop 10 years before the competition. A desktop that would thrive in as little as 512 kb of ram, with fast and reliable pre-emptive multitasking.

I don’t think people realize or understand the true value of Amiga OS. It’s not in the games (although games is definitively a huge part of the experience), the hardware or the programs. The reason people have been fighting bitterly over Amiga OS for a lifetime, is because the operating system architecture or “formula” is unmatched to this very day.

Can you imagine what a system that thrives under 512 KB would do to the desktop market? Or even better, what it could bring to the table for embedded and server technology?

And this is where my frustration soars. Even though we have OS 4.1, we have been forced to idly stand by and watch as mistake after mistake is made. Opportunities that are ripe for the taking (some of them literally placed on the doorstep of Hyperion) have been thrown by the wayside time and time again.

And they are not alone. Aros and Morphos have likewise missed a lot of opportunities; opportunities to generate income and secure development, as well as to embrace new technology. Although I must stress that I sympathize with Aros since they lack any official funding. Morphos is doing much better using a normal, commercial license.

Frustration, the mother of invention

When the Raspberry PI was first released I jumped for joy. Finally an SBC (single board computer) with enough power to run a light version of Amiga OS 4.1, with a price tag that everyone can live with. I rushed over to Hyperion to see if they had issued a statement about the PI, but nothing could be found. The AEON site was likewise empty.

The PI version 2 came and went, still with no sign that Hyperion would capitalize on the situation. I expected them to issue an “Amiga OS 4.1 light” edition for ARM, which would put them on the map and help them establish a user base. Without a user base and fresh blood there is no chance in hell of selling next-generation machines in large enough quantities to justify future development. But once again, opportunity after opportunity came and went.


Sexy, fast and modern: Amiga OS 4.1 would do wonders on ARM

Faster and better-suited SBCs started to turn up in droves: the ODroid, the Beaglebone Black, the Tinkerboard, the Banana PI – and many, many others. When the SnapDragon IV CPUs shipped on a $120 SBC, the same processor used by the Samsung Galaxy S6, I was sure Hyperion would wake up and bring Amiga OS to the masses. But not a word.

Instead we were told to wait for the Amiga x5000, which is based on PPC. I have no problem with PPC; it’s a great platform and packs a serious punch. But since PPC no longer sells to mainstream computer companies like it used to, the price penalty would be nothing short of astronomical. There is also the question of longevity and being able to maintain a PPC-based system for the foreseeable future. Where exactly is PPC in 15 years?

Note: One of the reasons PPC was selected has to do with coding infrastructure. PPC has an established standard, something ARM lacked at the time (this was first established for ARM in 2014). PPC also has an established set of development platforms that you can build on, with libraries and pre-fab modules (pre fabricated modules, think components that you can use to quickly build what you need) that have been polished for two decades now. A developer who knows PPC from the Amiga days will naturally feel more at home with PPC. But sadly PPC is the past and modern development takes place almost exclusively on ARM and x86. Even x86 is said to have an expiration date now.

The only group that genuinely tried to bring Amiga OS to ARM has been the Aros team. They got their system compiled, implemented some rudimentary drivers (information on this has been thin to say the least) and had it booting natively on the Raspberry PI 3b. Sadly they lacked a USB stack (remember I mentioned pre-fab modules above? Well, this is a typical example. PPC devtools ship with modules like this out of the box) so things like mouse, keyboard and external peripherals wouldn’t work.

3

Aeros, the fastest Amiga you will ever play with. Running on the Raspberry PI 3b

And like always, which is the curse of the Amiga, “something came up”, and the whole Raspberry PI / ARM initiative was left for dead. The details around this are sketchy, but the lead developer had a personal issue that forced him to set a new direction in life. And for some reason the other Aros developers have just continued with x86, even though a polished ARM version could have made them some money and helped finance future development. It’s the same story, again and again.

But then something amazing happened! Out of the blue came Pascal Papara with a new take on Aros, namely AEROS. This is a distro after my own heart. Instead of wasting years trying to implement everything from scratch, Pascal took the Linux kernel and ran with it. So Aeros boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded. And the result? It is the fastest desktop you will ever experience on ARM. Seriously, it runs so fast and smooth on the Raspberry PI that you could easily mistake it for a $450 Intel i3.

Sadly Pascal has been more or less alone about this development. And truth be told he has molded it to suit his own needs rather than the consumer. Since his work includes a game machine and some Linux services, the whole Linux system is exposed to the Aros desktop. This is a huge mistake.

Using the Linux kernel to capitalize on the thousands of man hours invested in that, not to mention the linux driver database which is massive, is a great idea. It’s also the first thing that came into my mind when contemplating the issue.

But when running Aros on top of this, the Linux aspect of the system should be abstracted away. Much like what Apple did with Unix. You should hardly notice that Linux is there unless you open a shell and start to investigate. The Amiga filesystem should be the only filesystem you see when accessing things from the desktop, and a nice preferences option for showing / hiding mounted Linux drives.

My plans for an ARM based Amiga inspired OS

Building an OS is not a task for the faint of heart. Yes, there are a lot of embedded / pre-fab systems to pick from out there, but you also have to be sensible. You are not going to code a better kernel than Linus Torvalds, so instead of wasting years trying to catch up with something you cannot possibly catch up with – just grab the kernel and make it work for us.

The Linux kernel solves things such as process contexts, “userland” vs “kernel space” (giving the kernel the power to kill a task and reclaim resources), multitasking / threading, thread priorities, critical sections, mutexes and global event objects; it gives us IPC (inter process communication), disk IO, established and rock solid sound and graphics frameworks; and last but perhaps most important: free access to the millions of drivers in the Linux repository.

Screenshot

Early Amibian.js login dialog

You would have to be certified insane to ignore the Linux kernel, thinking you will somehow be the guy (or group) that can teach Linus Torvalds a lesson. This is a man who has been writing kernels for 20+ years, and he does nothing else. He is surrounded by a proverbial army of developers who code, test, refactor and strive to deliver optimal performance, safety and quality assurance. So sorry if I push your buttons here, but you would be a moron to take him on. Instead, absorb the kernel and gain access to the benefits it has given Linux (technically the kernel is “Linux”, the rest is GNU – but you get what I mean).

With the Linux kernel as a foundation, as much as 50% of the work involved in writing our OS is finished already. You don’t have to invent a driver API. You don’t have to invent a new executable format (or write your own ELF parser if you stick with the Linux executable). You can use established compilers like GCC / Clang and Freepascal. And you can even cherry-pick some low-level packages for your own native API (like SDL, OpenGL and things that would take years to finish).

But while we want to build our house on rock, we don’t want it to be yet another Linux distro. So with the kernel in place and a significant part of our work done for us, that is also where the similarities end.

The end product is Amiga OS, which means that we need compatibility with the original Amiga ROM libraries (read: the API). Had we started from scratch, that would have been a tremendous effort, which is also why Aros is so important: Aros gives us a blueprint of how these APIs can be implemented.

But our main source of inspiration is not Aros, but Amithlon. What we want to do is naturally to pipe as much as we can from the Amiga APIs back to the Linux kernel. Things like device detection, memory allocation, file IO, pipes, networking — our library files will be mere thin wrappers that expose Amiga-compatible calls; methods that call the Linux kernel to do the job. So our Amiga library files will be proxy objects whenever possible.
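
A hypothetical sketch of what I mean, with FreePascal on the Linux side. The mode constants match the original AmigaOS dos.h values, but the function itself is purely illustrative:

uses
  SysUtils;

const
  MODE_OLDFILE = 1005; // open an existing file (AmigaOS dos.h)
  MODE_NEWFILE = 1006; // create / truncate

// dos.library Open() as a thin proxy over the host's file API,
// which on Linux ends up in the kernel's open() syscall
function AmigaDOS_Open(const AName: string; AMode: Integer): THandle;
begin
  if AMode = MODE_NEWFILE then
    Result := FileCreate(AName)
  else
    Result := FileOpen(AName, fmOpenReadWrite);
end;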

AmithlonQEmu

Amithlon, decades ahead of its time

The hard work comes when we get to the window manager, or Intuition. Here we can’t cheat by pushing things back to Linux. We don’t want to install X either (although we can render our system into the X framebuffer if we like), so we have to code a window manager. This is not as simple as it sounds, because our system must operate with multiple cores, be multi-threaded by design and tap into the grand scheme of things. Things like messages (which applications use to respond to input) must be established, and all the event codes from the original Amiga OS must be replicated.

So this work won’t be easy, but with the Linux kernel as a foundation, the hardest task of all is taken care of. The magic of a kernel is that of process management and task switching. This is about as hard-core as you can get; without that you can almost forget the rest. But since we base our system on the Linux kernel, we can focus 100% on the real task – namely to deliver a modern Amiga experience, one that is platform independent (read: conforms to standard Linux and can thus be recompiled and run anywhere Linux can run), preserves as much of the initial formula as possible – and can be successfully maintained far into the future.

By pushing as much of our work as possible into user-space (the process space where ordinary programs run; the kernel runs outside this space and is thus unaffected when a program crashes) and adhering to the Linux kernel beneath the bonnet, we have created a system that can be re-compiled anywhere Linux runs. And it can be done so without any change to our codebase. Linux takes care of things like drivers, OpenGL, sound — and presents to us a clean API that is identical on every platform. It doesn’t matter if it’s ARM, PPC, 68k, x86 or MIPS. As long as we follow the standards we are home free.

Last words

I hope all of this clears up the confusion that has surrounded the subject this week. Again, the misunderstanding that led to some unfortunate posts has been resolved. So there is no negativity, no drama and we are all on the same page.

amidesk

Early Amibian.js prototype, running 68k software in the browser via an optimized uae.js

Just remember that I have set some restrictions for my involvement here. I sincerely hope Hyperion and the Aros development group can focus on ARM, because the community needs this. While the Raspberry PI might seem too small a form-factor to run Aros, projects like Aeros have proven just how effective the Amiga formula is. I’m sure Hyperion could find a powerful ARM SOC in the price range of $120 and sell a complete package with profit for around $200.

What the Amiga community needs now, is not expensive hardware. The userbase has to be expanded horizontally across platforms. Amiga OS / Aros has much to offer the embedded market which today is dominated by overly complex Linux libraries. The Amiga can grow laterally as a more user-friendly alternative, much like Android did for the mobile market. Once the platform is growing and established – then custom hardware could be introduced. But right now that is not what we need.

I also hope that the Aros team drops whatever they are working on, forks Pascal Papara’s codebase, and spends a few weeks polishing the system. Abstract away the Linux foundation like Apple has done, get those sexy 32-bit OS4 icons (note: the icons used by Amiga OS 4 are available for free download from the designer’s website) and a nice theme that looks similar to OS 4 (but not too similar). Get Lazarus (the freepascal IDE) going and ship the system with ready-to-use Pascal, C/C++ and Basic development environments. Bring back the fun in computing! The code is already there, use it!

page2-1036-full

Aeros interfaces directly with Linux; I would propose a less direct approach

Just take something simple, like a compatible browser. It’s actually not that simple, both for reasons of complexity and how memory is handled by PPC. With a Linux foundation, things like Chromium Embedded could be linked into the Amiga side of things, and we would have a native, fast, established and up-to-date browser.

At the same time, since we have API-level compatibility, people can recompile their Aros and Morphos applications and they would run more or less unchanged.

I really hope that my little protest here, if nothing else, helps people realize that there are viable options readily at hand. Commodore is not coming back, and the only future this platform has – is the one we make. So people have to ask themselves how much they want a future.

If the OS gains momentum then there will be grounds for investors to look at custom hardware. They can then choose off-the-shelf parts that are inexpensive to cover the normal functionality you expect in a modern computer – while more resources can go into custom hardware that sets the system apart. But we can’t start there. It has to be built up brick by brick, standing on the shoulders of giants.

OK, rant over 🙂

Smart Mobile Studio 3.0 and beyond

March 20, 2018 Leave a comment

cascade_03

With Smart Mobile Studio 3.0 entering its second beta, Smart Pascal developers are set for a boost in quality, creativity and power. We have worked extremely hard on the product this past year, including a complete rewrite of all our visual controls (and I mean all). We also introduced a completely new theme engine, one that fully de-couples visual appearance from structural architecture (it also allows scripting inside the CSS theme files).

All of that could be enough for a version bump, but we didn’t stop there. Much of the sub-strata in Smart has been re-implemented. Focus has been on stability, speed and future growth. The system is now divided into a set of namespaces (System, SmartCL, SmartNJ, Phonegap, and Espruino), making it easier to navigate between the units as well as to expand the codebase in the future.

To better understand the namespaces and why this is a good idea, let’s go through how our units are organized.

smart_namespace

The RTL is made to expand easily and preserve as much functionality as possible

  • The System namespace is the foundation. It contains clean, platform independent code. Meaning code that doesn’t rely on the DOM (browser) or runtime (node). Focus here is on universal code, and to establish common object-pascal classes.
  • Our SmartCL namespace contains visual code, meaning code and controls that targets the browser and the DOM. SmartCL rests on the System namespace and draws functionality from it. Through partial classes we also expand classes introduced in the system namespace. A good example is System.Time.pas and SmartCL.Time.pas. The latter expands the class TW3Dispatch with functionality that will only work in the DOM.
  • SmartNJ is our high-level nodejs namespace. Here you find classes with fairly complex behavior, such as servers, memory buffers, processes and auxiliary classes. SmartNJ draws from the System namespace just like SmartCL. This was done to avoid multiple implementations of streams, utility classes and common functions. Being able to enjoy the same functionality under all platforms is a very powerful thing.
  • Phonegap is our namespace for mobile devices. A mobile application is basically a normal visual application using SmartCL, but where you access extra functionality through phonegap. Things like access to a device’s photos, filesystem, dialogs and so on is all delegated via phonegap.
  • Espruino is a namespace for working with Espruino micro-controllers. This has been a very low-level affair so far, due to size limitation on these devices. But with our recent changes you can now, when you need to, tap into the system namespace for more demanding behavior.

As you can see there is a lot of cool stuff in Smart Mobile Studio, and our codebase is maturing nicely. With our new organization we are able to expand both horizontally and vertically without turning the codebase into a gigantic mess (the VCL being a prime example of how not to implement a multi-platform framework).

Common behavior

One of the coolest things we have added has to be the new storage device classes. As you probably know the browser has a somewhat “limited” storage mechanism. You are stuck with name-value pairs in the cache, or a filesystem that is profoundly frustrating to work with. To remedy this we took the time to implement a virtual filesystem (in memory filesystem) that emits data to the cache; we also implemented a virtual storage device stack on top of it, one for each target (!).

In short, if a target has IO capability, we have implemented a storage “driver” for it. So instead of you having to write 4-5 different storage mechanisms – you can now write the storage code once, and it works everywhere.

This is a pretty cool system because it doesn’t limit us to local device storage. We can have device classes that talk to Google-Storage, One-Drive, Dropbox and so on. It also opens up for custom storage solutions should you already have this pre-made on your server.

Database support, a quick overview

Databases have always been available in Smart Mobile Studio. We have units for WebSQL, IndexDB and SQLite. In fact, we even compiled SQLite3 from native C code to asm.js, meaning that the whole database engine is now pure JavaScript and no longer dependent on W3C standards.

smart_db

Each DB engine is implemented according to a framework

Besides these we also have TW3Dataset, which is a clean Smart Pascal implementation of a single-table dataset (somewhat inspired by Delphi’s TClientDataset). In our previous beta we upgraded TW3Dataset with a robust expression parser, meaning that you can now set filters just like Delphi does. And it’s all written in Smart Mobile Studio, which means there are no dependencies.
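
A quick taste of the new filtering. TW3Dataset is real enough, but the property names and filter grammar below are written from memory and should be treated as assumptions:

// assumed API shape, mirroring Delphi's TDataset filtering
procedure ApplyAgeFilter(const Dataset: TW3Dataset);
begin
  Dataset.Filter := '(age > 30) and (name <> "Bob")'; // expression-parser filter
  Dataset.Filtered := True; // property name assumed
end;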


And of course, there are also direct connections to Embarcadero DataSnap servers and RemObjects SDK servers. This is excellent if you have an existing Delphi infrastructure.

A unified DB framework

If you were hoping for a universal DB framework in beta-2 of v3.0, sadly that will not be the case. The good news is that databases should make it into v3.2 at the latest.

Databases look simple: tables, rows and columns, right? But since each database engine known to JavaScript is written differently from the next, our model has to account for these differences and be dynamic enough to deal with them.

The model we used with WebSQL is turning out to be the best way forward, I feel, but it’s important to leave room for reflection and improvements.

So getting our DB framework established is a priority for us, and we have placed it on our timeline for (at the latest) v3.2. But I’m hoping to have it done by v3.1. So it’s a little ahead of us, but we need that time to properly evolve the framework.

Smart Desktop [a.k.a Amibian.js]

The feedback we have received on our Smart Desktop demos has been pretty overwhelming. It is also nice to know that our prototype is being used to deliver software to schools and educational centers. So our desktop is not going away!

smart_desktop

Fancy a game of Quake at 60+ fps? Web assembly rocks!

But we are not rushing into this without some thought first. The desktop will become a project type like I have written about many times before. So you will be able to create both the desktop and client applications for it. The desktop is suitable for software that requires a windowing environment (a bit like Sencha or similar frameworks). It is also brilliant for kiosk displays and as a remote application hub.

Our new storage device system came largely from Amibian, and with these now part of our RTL, we can clean up the prototype considerably!

Smart assembler

It may sound like an oxymoron, but a lab project we created while testing our parser framework (the system.text.parser unit) turned into an exercise in compiler / assembler making. We implemented a virtual machine that runs instructions represented by bytecodes (fairly straightforward stuff). It supports the most common assembler methods, vaguely inspired by the Motorola 68k processor with a good dose of ARM thrown in for good measure.

smart_assembler

Yes that is a full parser, assembler and runtime model

If you ponder why on earth this would be interesting, consider the following: most web platforms allow for scripting by third-party developers. And by opening up for that, the websites themselves become prone to attacks and security breaches. There is no denying that any JS-based framework is very fragile when potentially hundreds of unknown developers are hacking away at it.

But what if you could offer third parties the ability to write plugins using more traditional languages? Perhaps a dialect of pascal, a subset of basic, or perhaps C#? Wouldn’t that be much better? A language and (more importantly) runtime that you have 100% control over.

While our assembler, disassembler and runtime are still in their infancy (and meant as a demo and exercise), they have future potential. We also designed the instructions in such a way that JIT-compiling large chunks of code is possible – and the output (or codegen) can be replaced by, for example, web assembly.

Right now it’s just a curiosity that people can play with. But when we have more time I will implement high-level parsers and codegens that emit code via this assembler. Suddenly we have a language that runs under node.js, in the browser, or on any modern JS runtime engine – and it’s all done using nothing but Smart Mobile Studio.

Well, stay tuned for more!

Smart Pascal assembler, it’s a reality

January 31, 2018 2 comments

After all these years of activity I guess it is no secret that I am a bit over-active at times. I am usually happiest when I work on 2-3 things at the same time. I also do plenty of research to test theories and explore various technologies. So it’s never a dull moment – and this project has been no exception.

Bytecode based compilation

For the past 7 years I have worked closely with compiler tech of various types and complexity on a daily basis. Script engines like DWScript, PAXScript, PascalScript, C# script, JavaScript (the list continues) – all of these have been used in projects, either in-house or for customers, and each serves a particular purpose.

Now while they are all fantastic engines and deliver fantastic results – I have had this “itch” to create something new. Something that approaches the problem of interpreting, compiling and running code from a more low-level angle. One that is more standardized and not just a result of the inventor’s whim or particular style; which in my view results in a system that won’t need years of updates and maintenance. I am a strong believer in simplicity, meaning that most of the time – a simple ad-hoc solution is the best.

It was this belief that gave birth to Smart Mobile Studio to begin with. Instead of spending a year writing a classical parser, tokenizer, AST and code emitter – we forked DWScript and used it to perform the tokenizing for us. We were also lucky to catch the interest of Eric (the maintainer) and the rest is history. Smart Mobile Studio was born and made with off-the-shelf parts; not boring, grey studies by men in lab coats.

The bytecode project started around the summer of 2017. I had thought about it for a while, but this is when I finally took the time to sit down and pen my ideas for a portable virtual machine and bytecode based instruction set. A system that could be easily implemented in any language, from Basic to C/C++, without demanding the almost ridiculous system specs and know-how of Java or the Microsoft CLR.

I labeled the system LDef, short for “language definition format”; I have written a couple of articles on the subject here on my blog, but I did not yet have enough finished to demo my ideas.

Time is always a commodity, and like everyone else the majority of my time is invested in my day job, working on Smart Mobile Studio. The rest is divided between my family, social obligations, working out and hobbies. Hence progress has been slow and sporadic.

But I finally have a working prototype, so the LDEF parser, assembler, disassembler and runtime are no longer a theory but a functional virtual machine.

Power in simplicity

Without much fanfare I have finally reached the stage where I can demonstrate my ideas. It took a long time to get to this point, because before you can even think of designing a language or carving out a bytecode format, you have to solve quite a few fundamental concepts. These must be in place before you even entertain the idea of starting on the virtual machine – or the project will simply end up as useless spaghetti that nobody understands or wants to work with.

  • Text parsing techniques must be researched properly
  • Virtual machine design must be worked out
  • A well designed instruction-set must be architected
  • Platform criteria must be met

Text parsing sounds easy. It’s one of those topics where people reply “oh yeah, that’s easy” on autopilot. But when you really dig into this subject you realize it’s anything but easy. At least if you want a parser that is fast, trustworthy – and more importantly: one that can be ported to other dialects and languages with relative ease (Delphi, FreePascal, C#, C/C++ are obvious targets). The ideas quite frankly have to mature.

One of my most central criteria when writing this system has been: no pointers in the core system. How people choose to implement their version of LDEF in other languages is up to them (Delphi and FPC included), but the original prototype should be as clean and down to earth as possible.

Besides, languages like C# are not too keen on pointers anyway. You can use them, but you have to mark your assemblies as “unsafe”. And why bother when var and const parameters offer you a safe and portable alternative? Smart Mobile Studio (or Smart Pascal, the dialect we use) doesn’t use pointers either; we compile to JavaScript after all, where references are the name of the game. So avoiding pointers is more than central; it’s fundamental.

We want the system to be easy to port to any language, even Basic for that matter. And once the VM is ported, LDEF compiled libraries and assemblies can be loaded and used straight away.
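
Since var parameters carry over directly to how we already compile Smart Pascal, the pointer-free style is easy to demonstrate. Here is a minimal sketch of the idea – my own illustration, not a fragment of the actual LDEF sources: the caller’s buffer is filled through a var parameter instead of a raw pointer, which translates cleanly to JavaScript, C# and most other targets.

type
  TByteBuffer = array of byte;

procedure CopyBytes(const Source: TByteBuffer;
  Offset, Count: integer; var Target: TByteBuffer);
var
  x: integer;
begin
  // The caller's array is resized and filled in place;
  // no pointers or manual memory management anywhere
  SetLength(Target, Count);
  for x := 0 to Count - 1 do
    Target[x] := Source[Offset + x];
end;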

The virtual CPU and its aggregates

The virtual machine architecture is the hard part. That’s where the true challenge resides. All the other stuff, be it source parsing, expression management, building a model (AST), data types, generating jump tables, emitting bytecodes; all those tasks are trivial compared to the CPU and its aggregates.

The design and architecture of the cpu (or “runtime” or “virtual machine”, since it consists of many parts) affects everything. It especially shapes the cpu instructions (what they do and how). But as mentioned, the CPU is just one of many parts that make up the virtual machine. What about variable handling? How should variables be allocated, addressed and dealt with? The way the VM deals with this will directly reflect how the bytecode operates and how much code you need to initialize, populate and dispose of a variable.
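
To make this a bit more concrete, here is the skeleton every bytecode cpu shares: the fetch-decode-execute loop. This is a simplified, hypothetical fragment written purely for illustration – the real LDEF opcodes and records look different:

type
  TOpCode = (opNoop, opLoad, opAdd, opJmp, opHalt);

  TInstruction = record
    Op: TOpCode;
    Target: integer;  // register index (or jump address for opJmp)
    Source: integer;  // register index or inline constant
  end;

procedure ExecuteProgram(const Code: array of TInstruction);
var
  Regs: array[0..15] of integer;
  PC, x: integer;
begin
  for x := low(Regs) to high(Regs) do
    Regs[x] := 0;
  PC := 0;
  while PC < Length(Code) do
  begin
    case Code[PC].Op of
      opNoop: ;                                          // padding, does nothing
      opLoad: Regs[Code[PC].Target] := Code[PC].Source;  // inline constant -> register
      opAdd:  inc(Regs[Code[PC].Target], Regs[Code[PC].Source]);
      opJmp:
        begin
          PC := Code[PC].Target;                         // absolute jump
          continue;
        end;
      opHalt: break;
    end;
    inc(PC);
  end;
end;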

Then you have more interesting questions, like: how should the VM distinguish between global and local variable identities? We want the assembly code to be uniform like real machine code; we don’t want “special” instructions for global variables and a whole different set of instructions for local variables. LDEF allows you to pass registers, variables, constants and a special register (DC) for data control as you wish. You are not bound to using registers only for math, for instance.

I opted for an old trick from the Commodore days, namely “bit shift marking”. Local variables have the first bit of their ID set, while global variables have the first bit zeroed. This allows us to distinguish between global and local variables extremely fast.

Here is a simple example that demonstrates the technique. The id parameter is the variable id read directly from the bytecode:

function TExample.GetVarId(const Id: integer;
  var IsGlobal: boolean): integer; inline;
begin
  // Bit 0 holds the scope flag: 1 = local, 0 = global
  IsGlobal := (Id and 1) = 0;
  // The remaining bits hold the actual variable id
  result := Id shr 1;
end;

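Going the other way, the assembler has to encode ids with the same scheme when it emits bytecode. A hypothetical counterpart (my naming, not taken from the actual sources) could look like this:

function TExample.MakeVarId(const Index: integer;
  const IsLocal: boolean): integer; inline;
begin
  // Shift the real index up one bit and store the scope
  // flag in bit 0 (1 = local, 0 = global)
  result := Index shl 1;
  if IsLocal then
    result := result or 1;
end;
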
This is just one of a hundred details you need to mentally work out before you even attempt the big one: namely how to deal with OOP and inheritance.

So far we have only talked about low-level bytecodes (ILASM as it’s called under the .net regime). In both Java and .net, object orientation is intrinsic to the VM. The runtime engine “knows” about objects; it knows about classes and methods and expects the bytecode files to be neatly organized class structures.

LDEF “might” go that way; but honestly I find it more tempting to implement OOP in ASM itself. So instead of the runtime having intrinsic knowledge of OOP, a high level compiler will have to emit a scheme for OOP instead. I still need to think and research what is best regarding this topic.

Pictures or it didn’t happen

The prototype is now 97% complete. And it will be uploaded so that people can play around with it. The whole system is implemented in Smart Pascal first (a Delphi and FreePascal version will follow), which means it runs in your browser.

Like you would expect from any ordinary x86 assembler program (MASM, NASM, Gnu ASM, IAR [ARM] among others), the system consists of 4 parts:

  • Parser
  • Assembler
  • Disassembler
  • Runtime

So you can write source code directly in the browser, compile / assemble it – and then execute it on the spot. Then you can disassemble it and look at the results in-depth.

assembler

The virtual cpu

The virtual CPU sports a fairly common set of instructions. Unlike Java and .net the cpu has 16 data-aware registers (meaning the registers adopt the type of the value you assign to them, a bit like “variant” in Delphi and C++ Builder). Variables allocated using the alloc() instruction can be used just like a register; all the instructions support both registers and variables as params – as well as defined constants, inline constants and strings.

  • R[0] .. R[15] ~ Data aware work registers
  • V[x] ~ Allocated variable
  • DC ~ Data control register

The following instructions are presently supported:

  • alloc [id, datatype]
    Allocate temporary variable
  • vfree [id]
    Release previously allocated variable
  • load [target, source]
    Move data from source to target
  • push [source]
    Push data from a register or variable onto the stack
  • pop [target]
    Pop a value from the stack into a register or variable
  • add [target, source]
    Add value of source to target
  • sub [target, source]
    Subtract source from target
  • mul [target, factor]
    Multiply target by factor
  • div [target, factor]
    Divide target by factor
  • mod [target, factor]
    Modulo: set target to the remainder of target divided by factor
  • lsl [target, factor]
    Logical shift left, shift bits to the left by factor
  • lsr [target, factor]
    Logical shift right, shift bits to the right by factor
  • btst [target, bit]
    Test bit in target
  • bset [target, bit]
    Set bit in target
  • bclr [target, bit]
    Clear bit in target
  • and [target, source]
    And target with source
  • or [target, source]
    OR target with source
  • not [target]
    NOT value in target
  • xor [target, source]
    XOR target with source
  • cmp [target, source]
    Compare value in target with source
  • noop
    No operation, used mostly for byte alignment
  • jsr [label]
    Jump sub-routine
  • bne [label]
    Branch not equal, conditional jump based on a compare
  • beq [label]
    Branch equal, conditional jump based on a compare
  • rts
    Return from a JSR call
  • sys [id]
    Call a standard library function

The virtual cpu can support instructions with any number of parameters, but most take either one or two.
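
To give you a feel for how this reads in practice, here is a tiny hand-written listing using the instructions above. Keep in mind that the concrete syntax – label notation, comments and the sys id – is my own illustration and not lifted from the prototype:

  alloc V[0], int32    ; temporary counter variable
  load V[0], 0         ; start the counter at zero
  load R[0], 10        ; loop limit held in a register
loop:
  add V[0], 1          ; increment the counter
  cmp V[0], R[0]       ; compare counter against the limit
  bne loop             ; branch back until they are equal
  push V[0]            ; pass the result on the stack
  sys 1                ; hypothetical standard-library "print" routine
  pop V[0]
  vfree V[0]           ; release the temporary variable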

I will document more as the prototype becomes available.

Why buy a Vampire accelerator?

August 24, 2017 2 comments

With the Amiga about to re-enter the consumer market, a lot of us “old timers” are busy knocking the dust off our old machines. And I love my old machines, even though they are technically useless by modern standards. But these machines have a lot of inspiration in them, especially if you write code. And yes, there is a fair bit of nostalgia involved; there is no point in lying about any of this.

I mean, your mobile phone is probably 100 times faster than a vintage Amiga. But like you will discover with the new machines that are about to hit the market, there is more to this computer than you think. But a vintage Amiga? Sadly they lack the power to do anything useful [in the “modern” sense].

Enter the vampire

The Vampire is a product that started shipping about a year ago. It’s an FPGA based accelerator, and it’s quite frankly turning the retro scene on its head! Technically it’s a board that you just latch onto the CPU socket of your classic Amiga; it then takes over the whole machine and replaces the CPU and chipset with its own versions of these. Versions that are naturally a hell of a lot faster!

vanpire

The result is that the good old Amiga is suddenly beefy enough to play Doom, Quake, MP3 files and MPG video (click here to read the datasheet). In short: this little board gives your old Amiga machine a jolt of new life.

Emulation vs. FPGA

I’m not going to get into the argument about FPGA not being “real”, because that’s not what FPGA is about. Nor am I negative towards classic hardware – I own a ton of old Amiga gear myself. But I will get in your face when it comes to buying a Vampire.

Before we continue I just want to mention that there are two models of the Vampire. There is the add-on board I have just mentioned, which again is divided into different models for various Amiga versions (A600 and A500 so far). The second model is a completely stand-alone Vampire motherboard that won’t even need a classic Amiga to work. It will be, for all intents and purposes, a stand-alone SBC (single board computer) where you just hook up power, video, storage and mouse – and off you go!

This latter version, the stand-alone, is a project I firmly believe in. The old boards have been out of production since 1993 and are getting harder to come by. And just like people they will eventually break down and stop working. There is also price to consider, because getting your 20-year-old A500 fixed is not easy. First of all you need a specialist who knows how to fix these old things, and he will also need parts to work with. Since parts are no longer in production and homebrew can only go so far – well, a brand new motherboard that is compatible in every way sounds like a good idea.

There is also the fact that FPGA can reach absurd speeds. It has been mentioned that if the Vampire used a more expensive FPGA module, 68k based Amigas could compete with modern processors (Source: https://www.generationamiga.com/2017/08/06/arria-10-based-vampire-could-reach-600mhz/). Can you imagine a 68k Amiga running side by side with the latest Intel processors? Sounds like a lot of fun if you ask me!

20934076_1512859975447560_99984790080699660_o

Amiga 1000, in my view the best looking Amiga ever produced

But then there is emulation. Proper emulation, which for Amiga users can only mean one thing: UAE in all its magnificent diversity and incarnations.

Nothing beats firing up a real Amiga, but you know what? That feeling has been greatly exaggerated. I recently bought a sexy A1000, which is the first model ever made. This is the original Amiga, made way back before Commodore started to mess around with it. It cost me a small fortune to get – but hey, it was my first ever Amiga so I wanted to own one again.

But does it feel better than my Raspberry PI 3b powered A500? Nope. In fact I have only fired up the A1000 twice since I bought it, because having to wait for disks to load is just tedious (not to mention that you can’t get new, working floppy disks anymore). Seriously. I love the machine to bits but it’s just damn tedious to work on in 2017. It belongs to the 80s and no-one can ever take away its glory or its role in computer history. That achievement stands forever.

High Quality Emulation

If you have followed my blog and Amiga escapades, you know that my PI 3b based Amiga, overclocked to the hilt, yields roughly 3.2 times the speed of an Amiga 4000/040. This was at one point the flagship Commodore computer. The Amiga 4000s were used in movie production, music production, 3d rendering and heavy-duty computing all over the world. And the 35€ Raspberry PI gives you 3.2 times that power via the UAE4Arm emulator. I don’t care what the Vampire does, the PI will give it the beating of its life.

Compiling anything, even older stuff that is a joke by today’s standards, is painful on the Raspberry PI. Here showing my retro-fitted A500 PI with its sexy led keyboard. It will soon get a makeover with an UP board :)

My retrofitted Raspberry PI 3b Amiga. Serious emulation at high speed allowing for software development and even the latest Freepascal 3.x compiler

Then suddenly, out of the blue, Asus comes along with the Tinkerboard. A board that I hated when it first came out (read part-1 here, part-2 here) due to its shabby drivers. The boards have been collecting dust on my office shelf for six months or so – and it was blind luck that I downloaded and tested a new disk image. If you missed that part you can read the full article here.

And I’m glad I did because man – the Tinkerboard makes the Raspberry PI 3b look like a toy! Asus has also adjusted the price lately. It was initially priced at 75€, but in Norway right now it retails for about 620 NKR – or 62€. So yes, it’s about twice the price of the PI – but it also gives you twice the memory, twice the graphics performance, twice the IO performance and a CPU that is a pleasure to work with.

The Raspberry PI 3b can’t be overclocked to the extent that models 1 and 2 could. You can over-volt it and tweak the GPU and memory to make it run faster. But people don’t call that “overclocking” in the true sense of the word, because that means setting the CPU to run at speeds beyond the manufacturing specifications. So with the PI 3b there is relatively little you can do to make it run faster. You can speed it up a little bit, but that’s it. The Tinkerboard, on the other hand, can be overclocked to the hilt.

A1222

The A1222 motherboard is just around the corner [conceptual art]

Out of the box it runs at 1.5 GHz, but if you add a heatsink, a fan (important) and a 3A PSU – you can overclock it to 2.6 GHz. And like the PI you can also tweak memory and gpu. So the Tinkerboard will happily run 3 times faster than the PI. If you add a USB3 harddisk you will also beef up IO speeds by 100 megabytes a second – which makes a huge difference. Linux does memory paging, and that slows down everything if you just use the SD card.

In short: if you fork out 70€ you get an SBC that runs rings around both the Vampire and the Raspberry PI 3b. If we allow for the Linux services and drivers that have to run in the background: 3.2 x 3 = 9.6. Let’s round that down to 9, since the background services will cost some performance. But still — 70€ for an Amiga that runs 9 times faster than an A4000 with its MC68040 cpu? That should blow your mind!

I’m sorry, but there has to be something wrong with you if that doesn’t get your juices flowing. I rarely game on my classic Amiga setup. I’m a coder – but with this kind of firepower you can run some of the biggest and best Amiga titles ever made – and the Tinkerboard won’t even break a sweat!

You can’t afford to be a fundamentalist

There are some real nutbags in the Amiga community. I think we all agree that having the real deal is a great experience, but the prices we see these days are borderline insane. I had to fork out around 500€ to get my A1000 shipped from Belgium to Norway. Had tax been added on the original price, I would have been looking at something in the 700€ range. Still – 500€ for a 20-year-old computer that can hardly run Workbench 1.2? Unless you add the word “collector” here you are in fact barking mad!

If you are looking to get an Amiga for old times’ sake, or perhaps you have an A500 and wonder if you should fork out for the Vampire – will it be worth the 300€ price tag? Unless you use your Amiga on a daily basis I can’t imagine what you need a Vampire for. The stand-alone motherboard I can understand, that is a great idea – but the accelerator? 300€?

I mean, you can pay 70€ and get the fastest Amiga that has ever existed. Not a bit faster, not something in second place – no – THE FASTEST Amiga that has ever existed. If you think playing MP3 and MPG media files is cool with the Vampire, then you are in for a treat here, because the same software will work. You can safely download the latest patches and updates to various media players on the classic Amiga, and they will run just fine on UAE4Arm. But this time they will run a hell of a lot faster than on the Vampire.

14269402_10153769032280906_1149985553_n

My old broken A500 turned into an ass-kicking, battle hardened ARM monster

You really can’t be a fundamentalist in 2017 when it comes to vintage computers. And why would you want to? With so much cool stuff happening in the scene, why would you want to limit your Amiga experience to a single model? Aros is doing awesome stuff these days, you have the x5000 out and the A1222 just around the corner. MorphOS is stable and good on the G5 PPC — there has never been a time with so many options for Amiga enthusiasts! Not even during the golden days between 1989-1994 were there so many exciting developments.

I love the classic Amiga machines. I think the Vampire stand-alone model is fantastic, and if they ramp up the fpga to a faster model they will in fact have re-created a viable computer platform. A 68080 fpga based CPU that can go head to head with x86? That is quite an achievement – and I support it wholeheartedly.

But having to fork out this amount of cash just to enjoy a modern Amiga experience is a bit silly. You can actually, right now, go out and buy a $35 Raspberry PI and enjoy far better results than the Vampire is able to deliver. How can that be negative? I have no idea, nor will I ever understand that kind of thinking. How do any of these people expect the Amiga community to grow and get new, young members if the average price of a 20-year-old machine is 500€? Which incidentally is 50€ more than a brand new A1222 PPC machine capable of running OS 4.

And with the Tinkerboard you can get 9 times the speed of an A4000? How can that not give you goosebumps!

People talk about Java and virtual machines like it’s black magic. Well, UAE gives you a virtual CPU and chipset that makes mincemeat of both Java and C#. It also comes with one of the largest software libraries in the world. I find it inconceivable that no-one sees the potential in that technology beyond game playing – but when you become violent or nasty over hardware, then I guess that explains quite a bit.

I say, use whatever you can to enjoy your Amiga. And if your perfect Amiga is a PI or a Tinkerboard (or ODroid) – who cares!

I for one will not put more money into legacy hardware. I’m happy that I have the A1000, but that’s where it stops for me. I am looking forward to the latest Amiga x5000 PPC and can’t wait to get coding on that – but unless the Apollo crew upgrades to a faster FPGA I see little reason to buy anything. I would gladly pay 500 – 1000 € for something that can kick modern computers in the behind. And I imagine a lot of 68k users would be willing to do that as well. But right now PPC is a much better option, since it gives you both 68k and the new OS 4 platform at one price. And for affordable Amiga computing, emulation is now of such quality that you won’t really notice the difference.

And I love coding 68k assembler on my Amibian emulator setup. There is nothing quite like it 🙂

The Tinkerboard Strikes Back

August 20, 2017 Leave a comment

For those that follow my blog, you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it – in fact I bought two of the boards, because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never emerged for the user. It was released prematurely, and I think most people that bought it agree on this.

asus

The initial release was not just bad, it was horrible

Since my initial review those months ago, good things have happened. Asus seems to have listened to the “poonami” of negative feedback and adapted their website accordingly. Unlike my first visit, when you literally had to dig through recursive menus (which were less than intuitive in this case) just to download the software – the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get; downloading it is reduced to a one-liner for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever popular Python and Scratch.

I am a bit disappointed that they don’t provide FreePascal units. A lot of developers use Object Pascal on these boards after all, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good Pascal programmer – but it would be cool if Asus remembered that there are more languages in the world than C and Python.

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the text-book example of how not to release a product (the whole release has been odd; Asus is a huge, multi-national corporation, yet their release had “three-man basement band” written all over it).

So this is fantastic news! Finally the Tinkerboard delivers and can be used for real life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and the like; basically anything with a touch-screen that does something.

smarts

The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Netflix is written 100% in node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end that couples with an HTML5 front-end. This is a ready to use environment that you can deploy your own applications through. Things like storage, a nice looking UI, user logon and credentials and much, much more are all implemented for you. You don’t have to use it of course; you can write your own system from scratch if you like. We created “Amibian” to demonstrate just how powerful Smart Pascal can be in the right hands.

With this in mind – my main concern when testing SBCs (single board computers) is obviously web performance. By default JavaScript is a single core, event-driven runtime system; you can spawn threads of course, but it’s done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system friendly, a gentle giant if you like, which has turned out to be a good thing – because the way JS schedules execution makes it ideal for clustering!

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost whole eco-systems in themselves. With JIT compilers and LLVM-style optimization — it’s a whole new ballgame.

Making a scale

To give you a better context for where the Tinkerboard sits on the scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and seeing how each board performs. So I used the same 3 candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and last but not least: the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning that they were compiled into the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b

bassoon

Bassoon ran well, it’s not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (they contain hand-optimized assembler, so it’s not exactly a fair fight) – but when it comes to modern HTML5, the PI doesn’t stand a chance. You could perhaps use a Raspberry PI 3b for simple applications which are not graphics and cpu intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don’t want a Raspberry when doing high quality IOT, unless it’s headless code under node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box – in fact you can target 40 embedded systems without any change to your code. But for large, high quality web front-ends, the PI just won’t cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick-ass ARM experience anyway. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4

XU4CloudShellAssemble29

The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here. That’s only 10€ more than the Raspberry PI, which is a toy at best. But the ODroid XU4 delivers a good Linux desktop, and it’s well worth the extra 10€ when compared to the PI.

Personally I don’t understand why people keep buying PIs when there are so many better options on the market now. At least not if web technology is involved. A small server or emulator, sure, but not HTML5 and browsers. The PI just can’t handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that ODroid XU4 can run Quake.js! The PI can’t even start that because it’s so demanding. It is one of the largest and most resource hungry asm.js projects out there – but ODroid XU4 did a fantastic job.

Now it’s not a silky smooth experience, I would guess something along the lines of 17-20 fps. But you know what? That’s pretty good for a $45 board.

I have owned far worse x86 PC’s in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that Asus had done something to rectify the situation though, because Asus really should have done a better job before releasing it. It’s now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have delayed the release. I was not alone in butchering the board in reviews; it’s been a source of frustration for those that bought it. 75€ is not much, but no-one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order I have a feeling the Jessie image will be even faster – Ubuntu has always been a high-quality distribution, but it’s also one of the most demanding. One might even say it’s become bloated. But it does deliver a near Microsoft Windows-like experience, which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus has cleaned up its act and implemented the drivers properly, and you can feel that the moment the desktop comes into view. With the PI you are always fighting lagging performance. When you start a program the whole system freezes for a while, when you quit a program the system freezes – hell, when you move the mouse around the system bloody freezes! That is not the case with the Tinkerboard, that’s for sure. The Tinkerboard honestly feels more like running vanilla Ubuntu on a normal x86 PC.

  • Skid-Row: 10-15 frames per second
  • Quake: Full screen 32bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any bit of embedded coding, regardless of whether you are using Smart Mobile Studio or some other devkit — this is the board to get (!)

As already mentioned it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, letting you explore software far more complex than the PI and ODroid combined. With the Tinkerboard you can finally deliver a state of the art product built with off-the-shelf web components. It’s in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinker I felt cheated. It was so frustrating, because the specs were so good and the terrible performance simply came down to sloppy work and Asus releasing it prematurely for cash (let’s face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. Like I mentioned, Ubuntu is not the racecar of distributions out there. Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompiled every piece with maximum llvm optimization and ran as few services as possible — the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the Tinkerboard! It is by far the coolest board in my collection now. In fact it’s so good that I’m donating one to my daughter. She is presently using an iMac, which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and YouTube, buy a nice touch-screen display and wall mount it in her room.

Well done Asus!

Where is PowerPC today?

August 5, 2017 7 comments
ppcproto_2_big

Phase 5 PowerUP board prototype

Anyone who messed around with computers back in the 90s will remember PowerPC. This was the only real alternative to Intel’s complete dominance with the x86 CPUs, and believe me when I say the battle was fierce! Behind the PowerPC you had companies like IBM and Motorola, companies that both had (or have) an axe to grind with Intel. At the time the market was split in half – with Intel controlling the business PC segment, while Motorola and IBM represented the home computer market.

The moment we entered the 1990s it became clear that Intel and Microsoft were not going to stay on their side of the fence, so to speak. For Motorola in particular this was a death match in the true sense of the word, because the loss of both Apple and Commodore would represent billions in revenue.

What could you buy in 1993?

The early 90’s were bitter-sweet for both Commodore and Apple. Faster and more affordable PCs were already a reality, and as a consequence both Amiga machines and Macs were struggling to keep up.

The Amiga 1200 still represented a good buy. It had a massive library of software, both for entertainment and serious work. But it was never really suited for demanding office applications. It did wonders in video and multimedia development, and of course games and entertainment – but the jump in price between the A1200 and A4000 became harder and harder to justify. You could get a well equipped Mac with professional tools in that price range.

Apple on the other hand was never really an entertainment company. Their primary market was professional graphics, desktop publishing and music production (Photoshop, Pro-tools, Logic etc. were exclusive Mac products). When it came to expansions and ports they were more interested in connecting customers to industrial printers, midi devices and high-volume storage. Mac was always a machine for the upper class, people with money to burn; the Amiga dominated the middle class. It was a family computer.

But Apple was not a company in hiding, neither from Commodore nor from the Wintel threat. So in 1993 they introduced the Macintosh Quadra series to the consumer market. Unlike their other models this was aimed at home users and students, meaning that it was affordable, powerful and could be used for both homework and professional applications. It was a direct threat to the upper middle-class that could afford the big box Amiga machines.

Macintosh_Quadra_605

The 68k Macintosh Quadra came out in October of 1993

But no matter how brilliant these machines were, there was no hiding the fact that when it came to raw power, the PC was not taking any prisoners. It was graphically superior in every way, and Intel kept doubling CPU speed year after year – just like Moore’s law predicted.

With the 486-DX2 looming on the horizon, it was game over for the old and faithful processors. The Motorola 68k family had been there since the late 70’s; it was practically an institution, but it was facing enemies on all fronts and simply could not stand in the way of evolution.

The PowerPC architecture

If you are in your 20’s you won’t remember this, but back in the late 80’s and early 90’s the battle between computer vendors was indeed fierce. You have to take into consideration that Microsoft and Intel did a real number on IBM. Microsoft stabbed IBM in the back and launched Windows as a direct competitor to IBM’s OS/2. When I write “stabbed in the back” I mean that literally, because Microsoft was initially hired to create parts of OS/2. It was the typical lawsuit mess, not unlike Microsoft and Sun later, where people would pick sides and argue over who the culprit really was.

As you can imagine, IBM was both bitter and angry at Microsoft for stealing the home PC market in such a shameful way. Microsoft was supposed to help IBM and be their ally, but turned out to be their fiercest competitor. IBM had also created a situation where the PC was licensed to everyone (hence the term “IBM clone”) – meaning that any company could create parts for it, and there was little IBM could do to control the market like they were used to. They would naturally get revenue from these companies in the form of royalties (and would later retire 99% of all their products; why work when you get billions for doing nothing?), but at the time they were still in the game.

Motorola was in a bad situation themselves, with the 68k line of processors clearly incapable of facing the much faster x86 CPUs. Something new had to be created to secure their market share.

The result of this “marriage of necessity” was the PowerPC line of processors.

VMM-iMacs-LG

The Apple “Candy” Macs made PPC and computing sexy

Apple jumped on the idea. It was the only real alternative to x86. And you have to remember that had Apple gone x86 at that point, they would basically have fed the forces that wanted them dead. You could hardly make out where Microsoft started and Intel ended during the early 90s.

I’m going to spare you the whole fall and rebirth of Apple. Needless to say, Apple came to the point where their branch of PowerPC processors caused more problems than benefits. The type of PowerPC processors Apple used generated an absurd amount of heat, and it was turning into a real problem. We see this in their later models, like the dual cpu G5 PowerMac, where 40% of the cabinet is dedicated purely to cooling.

And yes, Commodore kicked the bucket back in 1994, so they never finished their new models. Which is a damn shame, because unlike Apple they went with a dedicated RISC processor. These models did not suffer the heating problems of the PPCs Apple had to deal with.

Note: PPC and RISC are two sides of the same coin. PPC processors are RISC based, but naturally there exist hundreds of different implementations. To avoid a ton of arguments around this topic I treat PPC as something different from PA-RISC, which Commodore was playing with in their Hombre “skunkworks” projects.

You can read all about Apple’s strain of PowerPC processors here, and PA-RISC here.

PPC in modern computers?

I am going to be perfectly honest. When I heard that the new Amiga machines were based on PowerPC, my reaction was less than polite. I mean, who the hell would use PowerPC in this day and age? Surely Apple’s spectacular failure would stand as a warning for all time? I was flabbergasted, to say the least.

the_red_one_498240f858553

The Amiga One came out and I didn’t even give it the time of day. The Sam440 motherboards came out, I couldn’t care less. It would have been nice to own one, but the price at the time and the lack of software were just too disproportionate to make sense.

And now there is the Amiga x5000, and a smaller, more affordable A1222 (a.k.a. “Tabor”) model just around the corner. And they are both equipped with a PPC CPU. There are just two logical conclusions you can draw when faced with this: either the maker of these products is nuttier than a Snickers bar, or there is something the general public doesn’t know.

What the general public doesn’t know has turned out to be quite a lot. While you would think PPC was dead and buried, the reality of PPC is not that simple. It turns out there is not just one PPC family (or branch) but several. The one that Apple used back in the day (and that MorphOS for some odd reason supports) represents just one branch of the PPC tree, if you like. I had no idea this was the case.

The first thing you are going to notice is that the CPU in the new Amigas doesn’t have the absurd cooling problems the old Macs suffered. There are no 20cm cooling ribs, you don’t need 2 fans on Ritalin to prevent a cpu meltdown – and you also don’t need a custom aluminium case to keep it cool (everyone thinks the “Mac Pro” cases were just to make them look cool; turns out it was more literal, the case was there to turn the inside into a fridge).

In other words, the branch of PPC that we have known so far, the one marketed as “PowerPC” by Apple, Phase5 and everyone back in the 90’s is indeed dead and buried. But that was just one branch, one implementation of what is known as PPC.

Remember when ARM died?

When I started to dig into the whole PPC topic I could not help but think about the Arm processor. It’s almost spooky to reflect on how much we, the consumers, blindly accept as fact. Just think about it: you were told that PowerPC was the bomb, so you bought that. Then you were told that PowerPC was crap and that x86 was the bomb, so you mentally buried PowerPC and bought x86 instead. The consumer market is the proverbial sheep farm, where most of us just blindly accept whatever advertising tells us.

This was also the case with Arm. Remember a company called Acorn? It was a great British company that invented, among other things, the Arm core. I remember reading articles about Acorn when I was a kid. I even sold my Amiga for a while and messed around with an Acorn Archimedes. A momentary lapse of sanity, I know; I quickly got rid of it and bought back my Amiga. But I did learn a lot from messing around in RISC OS.

lg__MG_5441

The Acorn Archimedes, a brilliant RISC based machine that sadly didn’t make it

My point is, everyone was told that Arm was dead back in the 80’s. The Acorn computers used a pure RISC processor at the time (again, PPC is RISC based, but I treat them as separate since the designs are miles apart), and it was no secret that they were hoping to equip their future Acorn machines with this new and magic Arm thing. Reading about the power and speed of Arm was very exciting indeed. Sadly such a computer never saw the light of day back in the 80’s. Acorn went bust, and the market rolled over Acorn much like it would Commodore later.

The point I’m trying to make is that everyone was told that Arm died with Acorn. And once that idea was planted in the general public, it became a self-fulfilling prophecy. Arm was dead. End of story. It doesn’t matter that Acorn had set up a separate company that was unaffected by the bankruptcy. Once the public deems something dead, it just vanishes from the face of the earth.

Fast forward to our time and Arm is no longer dead, quite the opposite! It’s presently eating its way into just about every piece of electronics you can think of. In many ways Arm is what made the IOT revolution possible. The whole Raspberry PI phenomenon would quite frankly never have happened without Arm. The low price coupled with the fantastic performance – not to mention that these cpus rarely need cooling (unless you overclock the hell out of them) – has made Arm the most successful CPU ever made.

The PPC market share

With Arm’s so-called death and re-birth in mind, let’s turn our eyes to PPC and look at where it is today. PPC has suffered pretty much the same fate as Arm once did. Once a branch of the tech is declared “dead” by media and spin-doctors – regardless of the fact that PPC is actually a cluster of designs, not a single design or “chip” – the general public blindly follows, and mentally buries the whole subject.

And yes, I admit it, I am guilty of this myself. In my mind there was no distinction between PPC and PowerPC. Which is a bit like not knowing the difference between rock & roll as a genre and KISS the rock band. To continue that parallel: what we have done is basically ban all rock bands, regardless of where they are from, because one band once gave a lousy concert.

And that is where we are at. PPC has become invisible in the consumer market, even though it’s there. Which is understandable considering the commercial mechanisms at work here, but is PPC really dead? This should be a simple question. And commercial mechanisms notwithstanding, the answer is a solid NO. PPC is not dead at all. We have just parked it in a mental limbo. Out of sight, out of mind and all that.

Tray-of-Broadway-Processors

Playstation 3, Nintendo WII U and Playstation VR all use Freescale PPC

PPC today has a strong foothold in industrial computing. The oil sector is one market that uses PPC SBCs (read: single board computers) extensively. You will find them in valve controllers, pump and drill systems, and pretty much any system that requires a high degree of reliability.

You will also be surprised to learn that cheap PPC SBCs enjoy the same low power requirements (3.3 – 5.0 V) that people adopt Arm for. And naturally, the more powerful the chip, the more juice it needs.

The reason that PPC is popular and still being used with great success is first of all reliability. This reliability is not just about physical hardware, but also software. PPC gives you two RTOSes (real-time operating systems) to choose from. Each of them comes with a software development toolchain that rivals whatever Visual Studio has to offer. So you get a good-looking IDE, a modern and up to date compiler, the ability to debug “live” on the boards – and also real-time signal processing protocols. The list of software modules you can pick from is massive.

neutrino_desktop

QNX RTOS desktop. This is a module you can embed in your own products

The last part of that paragraph, namely real-time signal processing, is extremely important. Can you imagine an oil valve under 40,000 cubic tons of pressure failing, while the regulator that is supposed to compensate doesn’t get the signal because Linux or Windows was busy with something else? It gets pretty nutty at that level.

The second market can be found in set-top boxes, game consoles and tv signal decoders. While this market is no doubt under attack from cheap Arm devices, PPC still has a solid grip here due to its reliability. PPC as an embedded platform has roughly two decades’ head start over Arm when it comes to software development. That is a lifetime in computing terms.

When developers look at technology for a product they are prototyping, the hardware is just one part of the equation. Being able to easily write software for the platform, perform live debugging of code on the boards, and maintain products over decades rather than consumer-grade 1-3 year warranty periods; it’s just a completely different ballgame. Technology like the external parts of a satellite dish runs for decades without maintenance. And there are good reasons why you don’t see x86 or Arm there.

ps3-productthumbnailr-v2-us-14aug14

Playstation 3 and the new PSX VR box both have a Freescale PPC cpu

As mentioned earlier, the PPC branch used today is not the same branch that people remember. I cannot stress this enough, because mixing these up is like mistaking Intel for AMD. They may share many features, but ultimately they are completely different architectures.

The “PowerPC” label we know from back in the day was used to promote the branch that Apple used. Amiga accelerators, like the PowerUP boards, also used that line of processors. And anyone who ever stuffed a PowerUP board in their A1200 probably remembers the cooling issues. I bought one of the more affordable PowerUP boards for my A1200, and to this day I remember the whole episode as a fiasco. It was haunted by instability, sudden crashes and IO problems – all of it connected to overheating.

But PPC today, as delivered by Freescale Semiconductor (bought by NXP back in 2015), is different. These chips don’t suffer the heat problems of their remote and extinct cousins, have low power requirements and are incredibly reliable. Not to mention leagues more powerful than anything Apple, Phase 5 or Commodore ever got their hands on.

Is Freescale for the Amiga a total blunder?

Had you asked me a few days back, chances are I would have said yes. I have known for a while that Freescale is used in the oil sector, but I did not take into consideration the strength of the development tools and the important role an RTOS holds in a critical production environment.

I must also admit that I had no idea that my Playstation and Nintendo consoles were PPC based. The Playstation 4 doesn’t use PPC on its motherboard, but if you buy the fantastic and sexy VR add-on package, you get a second module that is, again, a PPC based product.

It also turns out that IBM’s high-end mainframes, the ones Amazon and Microsoft use to build the bedrock of cloud computing, are likewise PPC based. So once again we see that PPC is indeed there, playing an important role in our lives – but most people don’t see it. So all of this is a matter of perspective.

Basic_Pack

The Nintendo WII U uses a Freescale PPC cpu, not exactly a below-par gaming system

But the Amiga x5000 or A1222 will not be controlling a high-pressure valve or serving half a million users (hopefully); so does this affect the consumer at all? Does any of this hold any value for you or me? What on earth would real-time feedback mean for a hobby user who just wants to play some games, watch a movie or code demos?

The answer is: it could have a profound benefit, but it needs to be developed and evolved first.

Musicians could benefit greatly from the superior signal processing features, but as of writing I have yet to find any mention of this in the Amiga NG SDK. So while the potential is there I doubt we will see it before the Amiga has sold in enough volume.

Fast and reliable signal dispatching in the architecture will also have a profound effect on IPC (inter-process communication), allowing separate processes to talk with each other faster and more reliably than under, say, Windows or Linux. Programmers typically use a mutex or a critical-section to protect memory while it’s being delivered to another process (note: painting in broad strokes here), and this is a very costly mechanism under Windows and Linux. For instance, the reason UAE is still single threaded is that isolating the custom chips in separate threads and having them talk turned out to be too slow. If PPC is able to deal with that faster, it also means that processes can communicate faster and more interesting software can be made. Even practical things like a web-server would greatly benefit from real-time message dispatching.
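
To show where that cost comes from, here is a minimal Delphi / FreePascal style sketch of the classic critical-section pattern (my own illustration; names and buffer size are made up). Every thread that wants to touch the shared buffer must block in Acquire until the current holder calls Release – and that serialization is exactly the overhead described above:

uses
  SyncObjs;

var
  // assumes Lock := TCriticalSection.Create has been run at startup
  Lock: TCriticalSection;
  SharedBuffer: array[0..255] of byte;

procedure WriteToBuffer(const Data: array of byte);
begin
  Lock.Acquire;  // blocks while another thread holds the lock
  try
    if (Length(Data) > 0) and (Length(Data) <= Length(SharedBuffer)) then
      Move(Data[0], SharedBuffer[0], Length(Data));
  finally
    Lock.Release;  // always release, even if an exception occurs
  end;
end;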

ppc

There is no lack of vendors for PPC SBC’s online, here from Abaco Systems

So for us consumers, it all boils down to volume. The Freescale branch of PPC processors is not dead and will be around for years to come; they are sold by the millions every year to a great variety of businesses; and while most of them operate outside the traditional consumer’s awareness, this does have a positive effect on pricing. The more a processor sells, the cheaper it becomes.

Most people feel that the Amiga x5000 is too expensive for a home computer, and they blame that on the CPU. They forget that 50% of the subtotal goes into making the motherboard and all the parts around the CPU. The CPU alone does not represent the price of a whole new platform. And that’s just the hardware! On top of this you have the job of re-writing a whole operating system from scratch, adding features that have evolved between 1994 and 2017, and making it all sing together through custom written drivers.

So it’s not your average programming project to say the least.

But is it really too expensive? Perhaps. I bought an iMac 2 years back that was supposed to be my work machine. I work as a developer and use VMWare for almost all my coding. It turned out the i5 based beauty just didn’t have the ram. And fitting it with more ram (it came with 16 gigabytes, I need at least 32) would cost a lot more than a low-end PC. The sad part is that had I gone for a PC, I could have treated myself to an i7 with 32 gigabytes of ram for the same price.

I later bit the bullet and bought a 3500€ Intel i7 monster with 64 gigabytes of ram and the latest Nvidia graphics card. Let’s just say that the Amiga x5000 is reasonable in that context. I basically have an iMac I have no use for; it just sits there collecting dust, reduced to a music player.

Secondly we have to look at potential. The Mac and Windows machines now have their potential completely exposed. We know what these machines do and it’s not going to change any time soon.

The Amiga has a lot of hidden potential that has yet to be realized. The signal processing is just one example. The most interesting is by far the Xena chip (XMOS), which allows developers to implement custom hardware in software. It might sound like FPGA, but XMOS is a different technology. Here you write code using a custom C compiler that generates a special brand of opcodes. Your code is loaded onto a part of the chip (the chip is divided into a number of squares, each representing a piece of logic, or “custom chip” if you like) and will then act as a custom chip.

790_2

The Amiga x5000 in all her glory, notice the moderate cooling for the CPU

The XENA technology could really do wonders for the Amiga. Instead of relying on traditional library files executed by the main CPU, things like video decoding, graphical effects, auxiliary 3D functionality and even emulation (!) can now be handled by XENA and executed in parallel with the main CPU.

If anything is going to make or break the Amiga, it won’t be the Freescale PPC processor – it will be the XENA chip and how they use it to benefit the consumer.

Just imagine running UAE almost solely on the XENA chip, emulating 68k applications at near native speed – without using the main CPU at all. Sounds pretty good! And this is a feature you won’t find on a PC motherboard. As always they will add it should it become popular, but right now it’s not even on the radar.

So I for one do believe that the next generation Amiga machines have a shot. The A1222 is probably going to be the defining factor. It will retail at an affordable price (around 450€) and will no doubt go head-to-head with both consoles and mid-range PCs.

So like always it’s about volume, timing and infrastructure. Everything but the actual processor to be honest.

Last words

It’s been a valuable experience to look around and read up on PPC. When I started my little investigation I had a dark picture in my head, where the new Amiga machines were just a waste of time. I am happy to say that this is not true, and that the Freescale processors are indeed alive and kicking.

It was also interesting to see how widespread PPC technology really is. It’s not just a specialist platform, although that is absolutely where its strength is financially; it ships in everything from your home router to your tv-signal decoder or game system. So it does have a foot in the consumer market, but like I have outlined here, most consumers have parked it in a blind spot, and we associate the word “PowerPC” with the Apple fiasco of the past. Which is a bit sad, because that is neither true nor fair.

djnick-amigaos4-screengrab

Amiga OS 4.x is turning out to be a very capable system

I have no problem seeing a future where the Amiga becomes a viable commercial product again. I think there is some way to go before that happens, and the spear-head is going to be the A1222 or a similar product.

But like I have underlined again and again – it all boils down to developers. A platform is only as good as the software you can run on it, and Hyperion should really throw themselves into porting games and creative software. They need to build up critical mass and ship the A1222 with a ton of titles.

For my personal needs I will be more than happy just owning the x5000. It doesn’t need to be a massive commercial success, because Amiga is in my blood and I will always enjoy using it. And yes, it is a bit expensive, and I’m not in the habit of buying machines like this left and right. But I can safely say that this is a machine I will be enjoying for many, many years to come – regardless of what others may feel about it.

I would suggest that Hyperion lower their prices to somewhere around 1000€ if possible. Right now they need to think volume rather than profit. Hopefully Hyperion will also start making the OS compatible with Arm; again my thoughts go to volume, and IOT and embedded systems need an alternative to Linux and Windows 10 embedded.

But right now I’m itching to start developing for it – and I’m not alone 🙂