Why buy a Vampire accelerator?

August 24, 2017

With the Amiga about to re-enter the consumer market, a lot of us “old timers” are busy knocking the dust off our old machines. And I love my old machines, even though they are technically useless by modern standards. But these machines have a lot of inspiration in them, especially if you write code. And yes, there is a fair bit of nostalgia involved – there is no point in lying about any of this.

I mean, your mobile phone is probably 100 times faster than a vintage Amiga. But as you will discover with the new machines that are about to hit the market, there is more to this computer than you think. But a vintage Amiga? Sadly it lacks the power to do anything useful [in the “modern” sense].

Enter the Vampire

The Vampire is a product that started shipping about a year ago. It’s an FPGA-based accelerator, and it’s quite frankly turning the retro scene on its head! Technically it’s a board that you just latch onto the CPU socket of your classic Amiga; it then takes over the whole machine and replaces the CPU and chipset with its own versions of these. Versions that are naturally a hell of a lot faster!

The result is that the good old Amiga is suddenly beefy enough to play Doom, Quake, MP3 files and MPG video (click here to read the datasheet). In short: this little board gives your old Amiga machine a jolt of new life.

Emulation vs. FPGA

I’m not going to get into the argument about FPGA not being “real”, because that’s not what FPGA is about. Nor am I negative towards classic hardware – I own a ton of old Amiga gear myself. But I will get in your face when it comes to buying a Vampire.

Before we continue I just want to mention that there are two models of the Vampire. There is the add-on board I have just mentioned, which is again divided into different models for various Amiga versions (A600 and A500 so far). The second model is a completely stand-alone Vampire motherboard that won’t even need a classic Amiga to work. It will be, for all intents and purposes, a stand-alone SBC (single board computer) that you just hook up to power, video, storage and a mouse – and off you go!

This latter version, the stand-alone, is a project I firmly believe in. The old boards have been out of production since 1993 and are getting harder to come by. And just like people, they will eventually break down and stop working. There is also price to consider, because getting your 20-year-old A500 fixed is not easy. First of all you need a specialist who knows how to fix these old things, and he will also need parts to work with. Since parts are no longer in production and homebrew can only go so far, well – a brand new motherboard that is compatible in every way sounds like a good idea.

There is also the fact that FPGA can reach absurd speeds. It has been mentioned that if the Vampire used a more expensive FPGA module, 68k-based Amigas could compete with modern processors. Can you imagine a 68k Amiga running side by side with the latest Intel i7 processors? Sounds like a lot of fun if you ask me!

Amiga 1000, in my view the best looking Amiga ever produced

But then there is emulation. Proper emulation, which for Amiga users can only mean one thing: UAE in all its magnificent diversity and incarnations.

Nothing beats firing up a real Amiga, but you know what? That experience has been greatly exaggerated. I recently bought a sexy A1000, which is the first model that was ever made. This is the original Amiga, made way back before Commodore started to mess around with it. It cost me a small fortune to get – but hey, it was my first ever Amiga so I wanted to own one again.

But does it feel better than my Raspberry PI 3b powered A500? Nope. In fact I have only fired up the A1000 twice since I bought it, because having to wait for disks to load is just tedious (not to mention that you can’t get new, working floppy disks anymore). Seriously. I love the machine to bits but it’s just damn tedious to work on in 2017. It belongs to the 80s and no-one can ever take away its glory or its role in computer history. That achievement stands forever.

High Quality Emulation

If you have followed my blog and Amiga escapades, you know that my PI 3b based Amiga, overclocked to the hilt, yields roughly 3.2 times the speed of an Amiga 4000/040. This was at one point the flagship Commodore computer. The Amiga 4000s were used in movie production, music production, 3D rendering and heavy-duty computing all over the world. And the 35€ Raspberry PI gives you 3.2 times that power via the UAE4Arm emulator. I don’t care what the Vampire does, the PI will give it the beating of its life.

Compiling anything, even older stuff that is a joke by today’s standards, is painful on the Raspberry PI. Here showing my retrofitted A500 PI with sexy LED keyboard. It will soon get a makeover with a UP board :)

My retrofitted Raspberry PI 3b Amiga. Serious emulation at high speed allowing for software development and even the latest Freepascal 3.x compiler

Then suddenly, out of the blue, Asus comes along with the Tinkerboard. A board that I hated when it first came out (read part 1 here, part 2 here) due to its shabby drivers. The boards have been collecting dust on my office shelf for six months or so – and it was blind luck that I downloaded and tested a new disk image. If you missed that part you can read the full article here.

And I’m glad I did, because man – the Tinkerboard makes the Raspberry PI 3b look like a toy! Asus has also adjusted the price lately. It was initially priced at 75€, but in Norway right now it retails for about 620 NOK – or 62€. So yes, it’s about twice the price of the PI – but it also gives you twice the memory, twice the graphics performance, twice the IO performance and a CPU that is a pleasure to work with.

The Raspberry PI 3b can’t be overclocked to the extent the model 1 and 2 could. You can over-volt it and tweak the GPU and memory to make it run faster. But people don’t call that “overclocking” in the true sense of the word, because overclocking means the CPU is set to run at speeds beyond the manufacturer’s specifications. So with the PI 3b there is relatively little you can do to make it run faster. You can speed it up a little bit, but that’s it. The Tinkerboard, on the other hand, can be overclocked to the hilt.

The A1222 motherboard is just around the corner [conceptual art]

Out of the box it runs at 1.5 GHz, but if you add a heatsink, a fan (important) and a 3A PSU – you can overclock it to 2.6 GHz. And like the PI you can also tweak the memory and GPU. So the Tinkerboard will happily run 3 times faster than the PI. If you add a USB3 hard disk you will also beef up IO speeds by a good 100 megabytes per second – which makes a huge difference. Linux does memory paging, and paging slows everything down if you just use the SD card.

In short: if you fork out 70€ you get an SBC that runs rings around both the Vampire and the Raspberry PI 3b. If we allow for some Linux services and drivers that have to run in the background: 3.2 × 3 = 9.6. Let’s round that down to 9, since the background services will cost a little performance. But still – 70€ for an Amiga that runs 9 times faster than an A4000 with an MC68040 CPU? That should blow your mind!

I’m sorry, but there has to be something wrong with you if that doesn’t get your juices flowing. I rarely game on my classic Amiga setup. I’m a coder – but with this kind of firepower you can run some of the biggest and best Amiga titles ever made – and the Tinkerboard won’t even break a sweat!

You can’t afford to be a fundamentalist

There are some real nutbags in the Amiga community. I think we all agree that having the real deal is a great experience, but the prices we see these days are borderline insane. I had to fork out around 500€ to get my A1000 shipped from Belgium to Norway. Had tax been added on the original price, I would have been looking at something in the 700€ range. Still – 500€ for a 20-year-old computer that can hardly run Workbench 1.2? Unless you add the word “collector” here, you are in fact barking mad!

Maybe you are looking to get an Amiga for old times’ sake, or perhaps you have an A500 and wonder if you should fork out for the Vampire. Will it be worth the 300€ price tag? Unless you use your Amiga on a daily basis I can’t imagine what you need a Vampire for. The stand-alone motherboard I can understand, that is a great idea – but the accelerator? 300€?

I mean, you can pay 70€ and get the fastest Amiga that has ever existed. Not a bit faster, not something in second place – no – THE FASTEST Amiga that has ever existed. If you think playing MP3 and MPG media files on the Vampire is cool, then you are in for a treat here, because the same software will work. You can safely download the latest patches and updates to the various media players on the classic Amiga, and they will run just fine on UAE4Arm. But this time they will run a hell of a lot faster than on the Vampire.

My old broken A500 turned into an ass-kicking, battle hardened ARM monster

You really can’t be a fundamentalist in 2017 when it comes to vintage computers. And why would you want to be? With so much cool stuff happening in the scene, why would you want to limit your Amiga experience to a single model? Aros is doing awesome stuff these days, you have the x5000 out and the A1222 just around the corner. MorphOS is stable and good on the G5 PPC – there has never been a time with so many options for Amiga enthusiasts! Not even during the golden days between 1989 and 1994 were there so many exciting developments.

I love the classic Amiga machines. I think the Vampire stand-alone model is fantastic, and if they ramp up the FPGA to a faster model they will in fact have re-created a viable computer platform. A 68080 FPGA-based CPU that can go head to head with x86? That is quite an achievement – and I support it wholeheartedly.

But having to fork out this amount of cash just to enjoy a modern Amiga experience is a bit silly. You can right now go out and buy a $35 Raspberry PI and enjoy far better results than the Vampire is able to deliver. How that can be seen as negative I have no idea, nor will I ever understand that kind of thinking. How do any of these people expect the Amiga community to grow and attract new, young members if the average 20-year-old machine costs 500€? Which, incidentally, is 50€ more than a brand new A1222 PPC machine capable of running OS 4.

And with the Tinkerboard you can get 9 times the speed of an A4000? How can that not give you goosebumps!

People talk about Java and virtual machines like it’s black magic. Well, UAE gives you a virtual CPU and chipset that makes mincemeat of both Java and C#. It also comes with one of the largest software libraries in the world. I find it inconceivable that no-one sees the potential in that technology beyond game playing – but when you become violent or nasty over hardware, then I guess that explains quite a bit.

I say, use whatever you can to enjoy your Amiga. And if your perfect Amiga is a PI or a Tinkerboard (or ODroid) – who cares!

I for one will not put more money into legacy hardware. I’m happy that I have the A1000, but that’s where it stops for me. I am looking forward to the new Amiga x5000 PPC and can’t wait to get coding on that – but unless the Apollo crew upgrades to a faster FPGA I see little reason to buy anything. I would gladly pay 500–1000€ for something that can kick modern computers in the behind, and I imagine a lot of 68k users would be willing to do that as well. But right now PPC is a much better option, since it gives you both 68k and the new OS 4 platform for one price. And for affordable Amiga computing, emulation is now of such quality that you won’t really notice the difference.

And I love coding 68k assembler on my Amibian emulator setup. There is nothing quite like it 🙂

 

The Tinkerboard Strikes Back

August 20, 2017

For those that follow my blog: you probably remember the somewhat devastating rating I gave the Tinkerboard earlier this year (click here for part 1, and here for part 2). It was quite sad having to give such a poor rating to what is ultimately a fine piece of hardware. I had high hopes for it – in fact I bought two of the boards, because I figured there was no way it could suck with those specs. But suck it did, and while the muscle was there, the drivers were in such a state that it never reached the user. It was released prematurely, and I think most people that bought it agree on this.

The initial release was not just bad, it was horrible

Since my initial review those months ago, good things have happened. Asus seems to have listened to the “poonami” of negative feedback and adapted their website accordingly. Unlike my first visit, when you literally had to dig through recursive menus (which was less than intuitive in this case) just to download the software – the disk images are now available at the bottom of the product page. So thumbs up for that (!)

They have also made the GPIO programming API a lot easier to get; downloading it is reduced to a “one-liner” for C developers, which is the way it should be. And they have likewise provided wrappers for other languages, like the ever-popular Python and Scratch.

I am a bit disappointed that they don’t provide Freepascal units. A lot of developers use Object Pascal on these boards after all, because Object Pascal gives you a better balance between productivity and depth. Pascal is easier to learn (it was designed for that, after all) but avoids some of the pitfalls of C/C++ while retaining all the good things. Porting over C headers is fairly easy for a good Pascal programmer – but it would be cool if Asus remembered that there are more languages in the world than C and Python.
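
Just to show how small that job can be, here is a minimal sketch of what a Freepascal unit wrapping a C GPIO library could look like. Note that the library name and the function names (gpio, gpio_init and friends) are hypothetical stand-ins for illustration, not Asus’ actual API:

    unit gpio;

    {$mode objfpc}

    interface

    const
      GPIO_IN  = 0;
      GPIO_OUT = 1;

    // Each declaration maps a (hypothetical) C function such as
    // "int gpio_init(void);" onto Pascal via the C calling convention
    function  gpio_init: longint; cdecl; external 'gpio';
    procedure gpio_set_mode(pin, mode: longint); cdecl; external 'gpio';
    procedure gpio_write(pin, value: longint); cdecl; external 'gpio';
    function  gpio_read(pin: longint): longint; cdecl; external 'gpio';

    implementation

    end.

With a unit like that in place, toggling a pin from Object Pascal is a five-line program – which is exactly why it is a shame to stop at C, Python and Scratch.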

All of this aside: the most important change of all is what Asus has done with the drivers! They have finally put together drivers that show off the capabilities of the hardware and unleash the speed we all hoped for when the board was first announced. And man does it show! My previous experience with the Tinkerboard was horrible; it was the text-book example of how not to release a product (the whole release has been odd; Asus is a huge, multi-national corporation, yet their release had “three-man basement band” written all over it).

So this is fantastic news! Finally the Tinkerboard delivers, and it can be used for real-life projects!

Smart IOT

At The Smart Company we both create and use our core product, Smart Mobile Studio, to deliver third-party solutions. As the name implies, Smart is a software development system initially made for mobile applications; but it quickly grew into a much larger toolchain and is exceptionally good for making embedded applications. By embedded applications I mean things that run on kiosk systems, cash machines and the like; basically anything with a touch-screen that does something.

The Smart desktop gives you a good starting point for embedded work

One of the examples that ship with Smart Pascal is a fully working embedded desktop environment. Smart compiles both for ordinary browsers (JavaScript environments with a traditional HTML5 display) and for node.js, which is JavaScript unbound by the strict rules of a browser. Developers typically use node.js to write highly scalable server software, but you are naturally not limited to that. Large parts of Netflix are built on Node.js, so we are talking serious firepower here.

Our embedded environment is called The Smart Desktop (also known as Amibian.js) and gives you a ready-made node.js back-end coupled with an HTML5 front-end. This is a ready-to-use environment that you can deploy your own applications through. Things like storage, a nice-looking UI, user logon and credentials and much, much more are all implemented for you. You don’t have to use it of course; you can write your own system from scratch if you like. We created “Amibian” to demonstrate just how powerful Smart Pascal can be in the right hands.

With this in mind – my main concern when testing SBCs (single board computers) is obviously web performance. By default JavaScript is a single-core, event-driven runtime system; you can spawn threads of course, but it’s done somewhat differently from how you would work in Delphi or C++. JavaScript is designed to be system-friendly, a gentle giant if you like, which has turned out to be a good thing – because the way JS schedules execution makes it ideal for clustering!
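
For contrast, this is the preemptive, shared-memory model a Delphi or Freepascal developer takes for granted – a minimal sketch using the standard TThread class. Under node.js you would instead fork separate worker processes and pass messages between them:

    program threaddemo;

    {$mode objfpc}

    uses
      {$ifdef unix}cthreads,{$endif}
      Classes, SysUtils;

    type
      TWorker = class(TThread)
      protected
        procedure Execute; override;
      end;

    procedure TWorker.Execute;
    begin
      // Runs preemptively on its own OS thread, sharing the program's
      // address space - the model JavaScript deliberately avoids
      WriteLn('Worker running on thread ', PtrUInt(ThreadID));
    end;

    var
      Worker: TWorker;
    begin
      Worker := TWorker.Create(false);  // false = start immediately
      Worker.WaitFor;                   // block until Execute finishes
      Worker.Free;
    end.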

Most people find it hard to believe that JavaScript can outperform native code, but the JavaScript runtimes of today are almost whole ecosystems in themselves. With JIT compilers and modern optimization back-ends – it’s a whole new ballgame.

Making a scale

To give you a better context for where the Tinkerboard sits on a scale, I decided to set up a couple of simple tests. Nothing fancy, just running the same web applications and seeing how each of them performs on different boards. So I used the same three candidates as before, namely the Raspberry PI 3b, the Hardkernel ODroid XU4 and, last but not least, the Asus Tinkerboard.

I set up the following applications to compile with the desktop system, meaning they were compiled within the Smart project. We have plenty of web applications, but for this I wanted to pick the most demanding apps in our library:

  • Skid-Row intro remake using the CODEF library
  • Quake 3 asm.js build
  • Plex

OK let’s go through them and see where the chips land!

The Raspberry PI 3b

Bassoon ran well, it’s not that demanding

The Raspberry PI was awful (click here for a video). There is no doubt that native applications like UAE4Arm run extremely well on the PI (UAE4Arm contains hand-optimized assembler, so it’s not exactly a fair fight) – but when it comes to modern HTML5 the PI doesn’t stand a chance. You could perhaps use a Raspberry PI 3b for simple applications which are not graphics- and CPU-intensive, but you can forget about anything remotely taxing.

It ran Bassoon reasonably fast, but all in all you really don’t want a Raspberry when doing high-quality IOT, unless it’s headless code under node.js perhaps. Frameworks like Johnny-Five give you a ton of GPIO features out of the box – in fact you can target 40 embedded systems without any change to your code. But for large, high-quality web front-ends, the PI just won’t cut it.

  • Skid-Row: 1 frame per second or less
  • Quake: Can’t even start, just forget it
  • Plex: Starts but it lags so much you can’t watch anything

But hey, I never expected $35 to give me a kick-ass ARM experience anyway. There are 1000 things the PI does very well, but HTML is not one of them.

ODroid XU4

The ODroid packs a lot of power!

The ODroid being faster than the Raspberry PI is nothing new, but I was surprised at how much power this board delivers. I never expected it to give me a Linux experience close to that of an x86 PC; I mean, we are talking about a 45€ SBC here. It’s only 10€ more than the Raspberry PI, which is a toy at best – and the ODroid XU4 delivers a good Linux desktop. It’s well worth the extra 10€ when compared to the PI.

Personally I don’t understand why people keep buying PIs when there are so much better options on the market now. At least not if web technology is involved. A small server or emulator, sure, but not HTML5 and browsers. The PI just can’t handle it.

  • Skid-Row: 4-5 frames per second
  • Quake: Runs at very enjoyable speed (!)
  • Plex: Runs well but you may want to pick SD or 720p to avoid lags

What really shocked me was that the ODroid XU4 can run Quake.js! The PI can’t even start that, because it’s so demanding. It is one of the largest and most resource-hungry asm.js projects out there – but the ODroid XU4 did a fantastic job.

Now, it’s not a silky smooth experience; I would guess something along the lines of 17-20 fps. But you know what? That’s pretty good for a $45 board.

I have owned far worse x86 PCs in my day.

The Tinkerboard

Before I powered up the board I was reluctant to push it too far, because I thought it would fail me once again. I did hope that something had been done by Asus to rectify the situation though, because Asus really should have done a better job before releasing it. It’s now been roughly 6 months since I bought it, and roughly 8 months since it was released here in Europe. It would have been better for them to have delayed the release. And I was not alone in butchering the board; it’s been a source of frustration for those that bought it. 75€ is not much, but no-one likes to throw money out the window like that.

Long story short: I downloaded the latest Ubuntu image and burned that to an SD card (I actually first downloaded the Debian Jessie image they have, but sadly you have to do a bit of work to turn that into a desktop system – so I decided to go for Ubuntu instead). If the drivers are in order I have a feeling the Jessie image will be even faster – Ubuntu has always been a high-quality distribution, but it’s also one of the most demanding. One might even say it’s become bloated. But it does deliver a near Windows-like experience, which has served the Linux community well.

But the Tinkerboard really delivers! (click here for the video) Asus have cleaned up their act and implemented the drivers properly, and you can feel that the moment the desktop comes into view. With the PI you are always fighting lagging performance. When you start a program the whole system freezes for a while, when you quit a program the system freezes – hell, when you move the mouse around the system bloody freezes! That is not the case with the Tinkerboard, that’s for sure. The Tinkerboard feels more like running vanilla Ubuntu on a normal x86 PC, to be honest.

  • Skid-Row: 10-15 frames per second
  • Quake: Full-screen 32-bit graphics, runs like hell
  • Plex: Plays back fullscreen HD, just awesome!

All I can say is this: if you are going to do any bit of embedded coding, regardless if you are using Smart Mobile Studio or some other devkit — this is the board to get (!)

As already mentioned it does cost almost twice as much as the PI, but that extra 30€ buys you loads of extra power. It opens up so many avenues of code, and you can explore software far more complex than on both the PI and ODroid combined. With the Tinkerboard you can finally deliver a state-of-the-art product built with off-the-shelf web components. It’s in a league of its own.

The ‘tinker’ rocks at last

When I first bought the Tinkerboard I felt cheated. It was so frustrating because the specs were so good, and the terrible performance came down to sloppy work and Asus releasing it prematurely for cash (let’s face it, they tapped into the lucrative market established by the PI foundation). By looking at the specs you knew it had the firepower to deliver so much, but it was held back by ridiculous drivers.

There is still a lot that can be done to make the Tinkerboard run even faster. As I mentioned, Ubuntu is not the racecar of distributions out there. Ubuntu is fat, there is no other way of saying it. So if someone took the time to create a minimalistic Jessie image, recompiling every piece with maximum compiler optimization and as few running services as possible – the Tinkerboard would positively fly!

So do I recommend it? I am thrilled to say that yes, I can finally recommend the Tinkerboard! It is by far the coolest board in my collection now. In fact, it’s so good that I’m donating one to my daughter. She is presently using an iMac, which is overkill for her needs at age 10. Now I can make a super simple menu with Netflix and YouTube, buy a nice touch-screen display and wall-mount it in her room.

Well done Asus!

Where is PowerPC today?

August 5, 2017

Phase 5 PowerUP board prototype

Anyone who messed around with computers back in the 90s will remember PowerPC. This was the only real alternative to Intel’s complete dominance with the x86 CPUs, and believe me when I say the battle was fierce! Behind the PowerPC you had companies like IBM and Motorola, companies that both had (or have) an axe to grind with Intel. At the time the market was split in half – with Intel controlling the business PC segment, while Motorola and IBM represented the home computer market.

The moment we entered the 1990s it became clear that Intel and Microsoft were not going to stay on their side of the fence, so to speak. For Motorola in particular this was a death match in the true sense of the word, because the loss of both Apple and Commodore represented billions in revenue.

What could you buy in 1993?

The early 90s were bittersweet for both Commodore and Apple. Fast, affordable PCs were already a reality, and as a consequence both Amiga machines and Macs were struggling to keep up.

The Amiga 1200 still represented a good buy. It had a massive library of software, both for entertainment and serious work. But it was never really suited for demanding office applications. It did wonders in video and multimedia development, and of course games and entertainment – but the jump in price between the A1200 and the A4000 became harder and harder to justify. You could get a well-equipped Mac with professional tools in that price range.

Apple, on the other hand, was never really an entertainment company. Their primary market was professional graphics, desktop publishing and music production (Photoshop, Pro Tools, Logic etc. were exclusive Mac products). When it came to expansions and ports they were more interested in connecting customers to industrial printers, MIDI devices and high-volume storage. The Mac was always a machine for the upper class, people with money to burn; the Amiga dominated the middle class. It was a family computer.

But Apple was not a company in hiding, neither from Commodore nor from the Wintel threat. So in 1993 they introduced the Macintosh Quadra 605 to the consumer market. Unlike their other models this was aimed at home users and students, meaning that it was affordable, powerful and could be used for both homework and professional applications. It was a direct threat to the upper middle class that could afford the big-box Amiga machines.

The 68k Macintosh Quadra came out in October of 1993

But no matter how brilliant these machines were, there was no hiding the fact that when it came to raw power, the PC was not taking any prisoners. It was graphically superior in every way, and Intel kept doubling CPU speed year after year, just like Moore’s law predicted.

With the 486DX2 looming on the horizon, it was game over for the old and faithful processors. The Motorola 68k family had been there since the late 70s; it was practically an institution, but it was facing enemies on all fronts and simply could not stand in the way of evolution.

The PowerPC architecture

If you are in your 20s you won’t remember this, but back in the late 80s and early 90s the battle between computer vendors was indeed fierce. You have to take into consideration that Microsoft and Intel did a real number on IBM. Microsoft stabbed IBM in the back and launched Windows as a direct competitor to IBM’s OS/2. When I write “stabbed in the back” I mean that literally, because Microsoft was initially hired to create parts of OS/2. It was the typical lawsuit mess, not unlike Microsoft and Sun later, where people would pick sides and argue over who the culprit really was.

As you can imagine, IBM was both bitter and angry at Microsoft for stealing the home PC market in such a shameful way. They were supposed to help IBM and be their ally, but turned out to be their fiercest competitor. IBM had also created a situation where the PC was licensed to everyone (hence the term “IBM clone”) – meaning that any company could create parts for it, and there was little IBM could do to control the market like they were used to. They would naturally get revenue from these companies in the form of royalties (and would later retire 99% of all their products – why work when you get billions for doing nothing?), but at the time they were still in the game.

Motorola was in a bad situation themselves, with the 68k line of processors clearly incapable of facing the much faster x86 CPUs. Something new had to be created to ensure their market share.

The result of this “marriage of necessity” was the PowerPC line of processors.

The Apple “Candy” Macs made PPC and computing sexy

Apple jumped on the idea. It was the only real alternative to x86. And remember – had Apple gone to x86 at that point, they would basically have fed the forces that wanted them dead. You could hardly make out where Microsoft started and Intel ended during the early 90s.

I’m going to spare you the whole fall and rebirth of Apple. Needless to say, Apple reached the point where their branch of PowerPC processors caused more problems than benefits. The type of PowerPC processors Apple used generated an absurd amount of heat, and it was turning into a real problem. We see this in their later models, like the dual-CPU G5 PowerMac, where 40% of the cabinet is dedicated purely to cooling.

And yes, Commodore kicked the bucket back in 1994, so they never finished their new models. Which is a damn shame, because unlike Apple they went with a dedicated RISC processor. Those models would not have suffered the heating problems the PPCs used by Apple had to deal with.

Note: PPC and RISC are two sides of the same coin. PPC processors are RISC-based, but naturally there exist hundreds of different RISC implementations. To avoid a ton of arguments around this topic, I treat PPC as something different from the PA-RISC design Commodore was playing with in their Hombre “skunkworks” project.

You can read all about Apple’s strain of PowerPC processors here, and about PA-RISC here.

PPC in modern computers?

I am going to be perfectly honest. When I heard that the new Amiga machines were based on PowerPC, my reaction was less than polite. I mean, who the hell would use PowerPC in this day and age? Surely Apple’s spectacular failure would stand as a warning for all time? I was flabbergasted, to say the least.

The Amiga One came out and I didn’t even give it the time of day. The Sam440 motherboards came out; I couldn’t care less. It would have been nice to own one, but the price at the time and the lack of software were just too disproportionate to make sense.

And now there is the Amiga x5000, and a smaller, more affordable A1222 (a.k.a. “Tabor”) model just around the corner. And they are both equipped with a PPC CPU. There are just two logical conclusions you can draw when faced with this: either the maker of these products is nuttier than a Snickers bar, or there is something the general public doesn’t know.

What the general public doesn’t know has turned out to be quite a lot. While you would think PPC was dead and buried, the reality of PPC is not that simple. It turns out there is not just one PPC family (or branch) but several. The one that Apple used back in the day (and that MorphOS for some odd reason supports) represents just one branch of the PPC tree, if you like. I had no idea this was the case.

The first thing you are going to notice is that the CPU in the new Amigas doesn’t have the absurd cooling problems the old Macs suffered. There are no 20 cm cooling ribs, you don’t need two fans on Ritalin to prevent a CPU meltdown, and you don’t need a custom aluminium case to keep it cool (everyone thinks the “Mac Pro” cases were just to make the machines look cool; it turned out to be more literal – the case was there to turn the inside into a fridge).

In other words, the branch of PPC that we have known so far, the one marketed as “PowerPC” by Apple, Phase 5 and everyone else back in the 90s, is indeed dead and buried. But that was just one branch, one implementation, of what is known as PPC.

Remember when ARM died?

When I started to dig into the whole PPC topic I could not help but think about the Arm processor. It’s almost spooky to reflect on how much we, the consumers, blindly accept as fact. Just think about it: you were told that PowerPC was the bomb, so you bought that. Then you were told that PowerPC was crap and that x86 was the bomb, so you mentally buried PowerPC and bought x86 instead. The consumer market is the proverbial sheep farm, where most of us just blindly accept whatever advertising tells us.

This was also the case with Arm. Remember a company called Acorn? It was a great British company that invented, among other things, the Arm core. I remember reading articles about Acorn when I was a kid. I even sold my Amiga for a while and messed around with an Acorn Archimedes. A momentary lapse of sanity, I know; I quickly got rid of it and bought back my Amiga. But I did learn a lot from messing around in RISC OS.

The Acorn Archimedes, a brilliant RISC-based machine that sadly didn’t make it

My point is, everyone was told that Arm was dead back in the 80s. The Acorn computers used a pure RISC processor at the time (again, PPC is a RISC-based CPU, but I treat them as separate since the designs are miles apart), and it was no secret that they were hoping to equip their future Acorn machines with this new and magic Arm technology. Reading about the power and speed of Arm was very exciting indeed. Sadly such a machine never saw the light of day in the 80s as Acorn hoped; Acorn went bust, and the market rolled over Acorn much like it would roll over Commodore later.

The point I’m trying to make is that everyone was told that Arm died with Acorn. And once that idea was planted in the general public, it became a self-fulfilling prophecy. Arm was dead, end of story. It didn’t matter that Acorn had set up a separate company that was unaffected by the bankruptcy. Once the public deems something dead, it just vanishes from the face of the earth.

Fast forward to our time, and Arm is no longer dead – quite the opposite! It’s presently eating its way into just about every piece of electronics you can think of. In many ways Arm is what made the IOT revolution possible. The whole Raspberry PI phenomenon would quite frankly never have happened without Arm. The low price coupled with the fantastic performance – not to mention that these CPUs rarely need cooling (unless you overclock the hell out of them) – has made Arm the most successful CPU family ever made.

The PPC market share

With Arm’s so-called death and rebirth in mind, let’s turn our eyes to PPC and look at where it is today. PPC has suffered pretty much the same fate as Arm once did. Once a branch of the tech is declared “dead” by media and spin-doctors – regardless of the fact that PPC is actually a cluster of designs, not a single design or “chip” – the general public blindly follows and mentally buries the whole subject.

And yes, I admit it, I am guilty of this myself. In my mind there was no distinction between PPC and PowerPC. Which is a bit like not knowing the difference between rock & roll as a genre and KISS the rock band. To continue that parallel: what we have done is basically ban all rock bands, regardless of where they are from, because one band once gave a lousy concert.

And that is where we are at. PPC has become invisible in the consumer market, even though it’s there. Which is understandable considering the commercial mechanisms at work – but is PPC really dead? This should be a simple question, and commercial mechanisms notwithstanding, the answer is a solid NO. PPC is not dead at all. We have just parked it in a mental limbo. Out of sight, out of mind and all that.

PlayStation 3, Nintendo Wii U and PlayStation VR all use Freescale PPC

PPC today has a strong foothold in industrial computing. The oil sector is one market that uses PPC SBCs (single board computers) extensively. You will find them in valve controllers, pump and drill systems, and pretty much any system that requires a high degree of reliability.

You may also be surprised to learn that cheap PPC SBCs enjoy the same low power requirements that people adopt Arm for (3.3–5.0 V). And naturally, the more powerful the chip, the more juice it needs.

The reason PPC is popular and still being used with great success is first of all reliability. This reliability is not just about the physical hardware but also the software. PPC gives you two RTOSs (real-time operating systems) to choose from. Each of them comes with a software development toolchain that rivals whatever Visual Studio has to offer. So you get a good-looking IDE, a modern and up-to-date compiler, the ability to debug “live” on the boards – and also real-time signal processing protocols. The list of software modules you can pick from is massive.

The QNX RTOS desktop. This is a module you can embed in your own products

The last part of that paragraph, namely real-time signal processing, is extremely important. Can you imagine an oil valve under 40,000 cubic tons of pressure failing, while the regulator that is supposed to compensate doesn’t get the signal because Linux or Windows was busy with something else? It gets pretty nutty at that level.

The second market can be found in set-top boxes, game consoles and TV signal decoders. While this market is no doubt under attack from cheap Arm devices, PPC still has a solid grip here due to its reliability. PPC as an embedded platform has roughly two decades’ head start over Arm when it comes to software development. That is a lifetime in computing terms.

When developers look at technology for a product they are prototyping, the hardware is just one part of the equation. Being able to easily write software for the platform, perform live debugging of code on the boards, and maintain products over decades rather than consumer-grade 1-3 year warranties; it’s just a completely different ballgame. Technology like the external parts of a satellite dish runs for decades without maintenance. There are good reasons why you don’t see x86 or Arm there.

The PlayStation 3 and the new PS VR box both have a Freescale PPC CPU

As mentioned earlier, the PPC branch used today is not the same branch that people remember. I cannot stress this enough, because mixing these up is like mistaking Intel for AMD. They may share many features, but ultimately they are completely different architectures.

The “PowerPC” label we know from back in the day was used to promote the branch that Apple used. Amiga accelerators also used that line of processors for their PowerUP boards. And anyone who ever stuffed a PowerUP board into their A1200 probably remembers the cooling issues. I bought one of the more affordable PowerUP boards for my A1200, and to this day I remember the whole episode as a fiasco. It was haunted by instability, sudden crashes and IO problems – all of it connected to overheating.

But the PPC processors delivered today by Freescale Semiconductor (bought by NXP back in 2015) are different. They don’t suffer the heat problems of their remote and extinct cousins, they have low power requirements, and they are incredibly reliable. Not to mention leagues more powerful than anything Apple, Phase 5 or Commodore ever got their hands on.

Is Freescale for the Amiga a total blunder?

Had you asked me a few days back, chances are I would have said yes. I have known for a while that Freescale is used in the oil sector, but I had not taken into consideration the strength of the development tools and the important role an RTOS holds in a critical production environment.

I must also admit that I had no idea that my PlayStation and Nintendo consoles were PPC-based. The PlayStation 4 doesn’t use PPC on its motherboard, but if you buy the fantastic and sexy VR add-on package, you get a second module that is, again, a PPC-based product.

It also turns out that the IBM high-end mainframes that Amazon and Microsoft use to build the bedrock of cloud computing are likewise PPC-based. So once again we see that PPC is indeed there, playing an important role in our lives – most people just don’t see it. All of this is a matter of perspective.

The Nintendo Wii U uses a Freescale PPC CPU; not exactly a below-par gaming system

But the Amiga x5000 or A1222 will not be controlling a high-pressure valve or serving half a million users (hopefully), so does any of this affect the consumer at all? Does it hold any value for you or me? What on earth would real-time feedback mean for a hobby user who just wants to play some games, watch a movie or code demos?

The answer is: it could have a profound benefit, but it needs to be developed and evolved first.

Musicians could benefit greatly from the superior signal processing features, but as of writing I have yet to find any mention of this in the Amiga NG SDK. So while the potential is there, I doubt we will see it before the Amiga has sold in enough volume.

Fast and reliable signal dispatching in the architecture would also have a profound effect on IPC (inter-process communication), allowing separate processes to talk with each other faster and more reliably than under, say, Windows or Linux. Programmers typically use a mutex or a critical section to protect memory while it’s being handed to another process (note: painting in broad strokes here), and this is a very costly mechanism under Windows and Linux. For instance, the reason UAE is still single-threaded is that isolating the custom chips in separate threads and having them talk turned out to be too slow. If PPC is able to deal with that faster, it also means that processes could communicate faster and more interesting software could be made. Even practical things like a web server would greatly benefit from real-time message dispatching.
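
To make that cost concrete, here is a minimal Freepascal sketch of the locking pattern described above (shown between threads in a single process for brevity; between separate processes you would use shared memory and a semaphore, but the cost argument is the same). Every Enter/Leave pair can mean a round-trip through the OS scheduler when the lock is contended – which is exactly the overhead fast hardware-level message dispatching would remove:

    program lockdemo;

    {$mode objfpc}

    uses
      {$ifdef unix}cthreads,{$endif}
      Classes, SysUtils, SyncObjs;

    var
      Lock: TCriticalSection;
      SharedState: integer = 0;  // stand-in for state shared by emulator threads

    type
      TChipThread = class(TThread)
      protected
        procedure Execute; override;
      end;

    procedure TChipThread.Execute;
    var
      i: integer;
    begin
      for i := 1 to 100000 do
      begin
        Lock.Enter;          // contended locks like this are what made
        Inc(SharedState);    // a multi-threaded UAE slower, not faster
        Lock.Leave;
      end;
    end;

    var
      a, b: TChipThread;
    begin
      Lock := TCriticalSection.Create;
      a := TChipThread.Create(false);
      b := TChipThread.Create(false);
      a.WaitFor;
      b.WaitFor;
      WriteLn('final state: ', SharedState);
      a.Free;
      b.Free;
      Lock.Free;
    end.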

There is no lack of vendors for PPC SBCs online; this one is from Abaco Systems

So for us consumers it all boils down to volume. The Freescale branch of PPC processors is not dead and will be around for years to come; these chips are sold by the millions every year to a great variety of businesses. And while most of them operate outside traditional consumer awareness, that does have a positive effect on pricing: the more a processor sells, the cheaper it becomes.

Most people feel that the Amiga x5000 is too expensive for a home computer, and they blame that on the CPU – forgetting that half the subtotal goes into making the motherboard and all the parts around the CPU. The CPU alone does not represent the price of a whole new platform. And that’s just the hardware! On top of this you have the job of re-writing a whole operating system from scratch, adding features that have evolved between 1994 and 2017, and making it all sing together through custom-written drivers.

So it’s not your average programming project to say the least.

But is it really too expensive? Perhaps. I bought an iMac two years back that was supposed to be my work machine. I work as a developer and use VMWare for almost all my coding. It turned out the i5-based beauty just didn’t have the RAM, and fitting it with more (it came with 16 gigabytes, I need at least 32) would cost a lot more than a low-end PC. The sad part is that had I gone for a PC, I could have treated myself to an i7 with 32 gigabytes of RAM for the same price.

I later bit the bullet and bought a 3500€ Intel i7 monster with 64 gigabytes of RAM and the latest Nvidia graphics card. Let’s just say that the Amiga x5000 is reasonable in that context. I basically have an iMac I have no use for; it just sits there collecting dust, reduced to a music player.

Secondly, we have to look at potential. The Mac and Windows machines have their potential completely exposed by now. We know what these machines do, and it’s not going to change any time soon.

The Amiga has a lot of hidden potential that has yet to be realized. The signal processing is just one example. The most interesting is by far the Xena chip (XMOS), which allows developers to implement custom hardware in software. It might sound like FPGA, but XMOS is a different technology. Here you write code using a custom C compiler that generates a special brand of opcodes. Your code is loaded onto a part of the chip (the chip is divided into a number of squares, each representing a piece of logic, or “custom chip” if you like) and will then act as a custom chip.

The Amiga x5000 in all her glory, notice the moderate cooling for the CPU

The XENA technology could really do wonders for the Amiga. Instead of relying on traditional library files executed by the main CPU, things like video decoding, graphical effects, auxiliary 3D functionality and even emulation (!) can now be dealt with by XENA and executed in parallel with the main CPU.

If anything is going to make or break the Amiga, it’s not going to be the Freescale PPC processor – it’s going to be the XENA chip and how they use it to benefit the consumer.

Just imagine running UAE almost solely on the XENA chip, emulating 68k applications at near-native speed without using the main CPU at all. Sounds pretty good! And this is a feature you won’t find on a PC motherboard. Like always, they will add it should it become popular, but right now it’s not even on the radar.

So I for one do believe that the next generation of Amiga machines has a shot. The A1222 is probably going to be the defining factor. It will retail at an affordable price (around 450€) and will no doubt go head-to-head with both consoles and mid-range PCs.

So like always it’s about volume, timing and infrastructure. Everything but the actual processor, to be honest.

Last words

It’s been a valuable experience to look around and read up on PPC. When I started my little investigation I had a dark picture in my head, where the new Amiga machines were just a waste of time. I am happy to say that this is not true; the Freescale processors are indeed alive and kicking.

It was also interesting to see how widespread PPC technology really is. It’s not just a specialist platform, although that is absolutely where its strength is financially; it ships in everything from your home router to your TV signal decoder or game system. So it does have a foot in the consumer market, but as I have outlined here, most consumers have parked it in a blind spot, and we associate the word “PowerPC” with Apple’s fiasco in the past. Which is a bit sad, because it’s neither true nor fair.

Amiga OS 4.x is turning out to be a very capable system

I have no problem seeing a future where the Amiga becomes a viable commercial product again. I think there is some way to go before that happens, and the spearhead is going to be the A1222 or a similar product.

But as I have underlined again and again – it all boils down to developers. A platform is only as good as the software you can run on it, and Hyperion should really throw themselves into porting games and creativity software. They need to build up critical mass and ship the A1222 with a ton of titles.

For my personal needs I will be more than happy just owning the x5000. It doesn’t need to be a massive commercial success, because Amiga is in my blood and I will always enjoy using it. And yes, it is a bit expensive, and I’m not in the habit of buying machines like this left and right. But I can safely say that this is a machine I will be enjoying for many, many years into the future – regardless of what others may feel about it.

I would suggest that Hyperion lower their prices to somewhere around 1000€ if possible. Right now they need to think volume rather than profit, and hopefully Hyperion will start making the OS compatible with Arm. Again my thoughts go to volume, and to the fact that IOT and embedded systems need an alternative to Linux and Windows 10 Embedded.

But right now I’m itching to start developing for it – and I’m not alone 🙂

Amiga OS 4, Object Pascal and everything

August 2, 2017

Those that read my blog know that I’m a huge fan of the Commodore Amiga machines. This was a line of computers that took the world by storm around 1985 and held its ground until 1993. Sadly, the company had to file for bankruptcy after a series of absurd financial escapades by its management.

The original team before it fell prey to mismanagement

The death of Commodore is one of the great tragedies in computing history. There is no doubt that Commodore represented a much-needed alternative to Microsoft and Apple – and its death meant technological innovation took a turn for the worse.

Large books have been written on this subject, as well as great documentaries and movies – so I’m not going to dig further into the drama here. Ars Technica has a range of articles covering the whole story, so if you want to understand how the market got the way it is today, head over and read up.

On a personal level I find the classic Amiga machines a source of great inspiration even now. Despite Commodore dying in the 90s, I still stumble over amazing source code for this awesome computer decades after the fact. There are a few things in Amiga OS that “hint” at its true age, but ultimately the system has aged with amazing elegance and grace. It just blows people away when they realize that the Amiga desktop hit the market in 1985 – and that much of what we regard as a modern desktop experience is actually inherited from the Amiga.

Amiga OS is highly customizable. Here showing OS 3.9 [the last of the classic OS versions]

As I type this, the Amiga is going through a form of revival. It’s actually remarkable to be a part of this, because the scope of the endeavour is monumental. But even more impressive is just how many people are involved. It’s not some tiny “computer cult” where a bunch of misfits hang out in sad corners of the internet. Nope, we are talking about thousands of educated, technical people who still use their Amiga computers on a daily basis.

For instance: the realization of the new Amiga models has cost £1.2 million, so there are serious players involved in this.

The user base is varied, of course; it’s not all developers and engineers. You have gamers who love to kick back with some high-quality retro gaming. You have graphics designers who pixel large masterpieces (an almost lost art in this day and age). And you have musicians who write awesome tracks, then use them to spice up otherwise flat and dull PC-based productions.

What is even more awesome is the coding. Even the latest Freepascal has been ported, so if you were expecting people hand-punching hex codes you will be disappointed. While the Amiga is old in technical terms, it was so far ahead of the competition that people are surprised at just how capable the classic systems are.

And yes, people code games, demos and utility programs for the classic Amiga systems even today. I just installed a Dropbox cloud driver on my system, and it works brilliantly.

The brand new Amiga

Classic Amiga machines are awesome, but this post is not about the old models; it’s about the new models that are coming out now. Yes, you read that right: next-generation Amiga computers have finally become a reality. Having waited for 22 years, I am thrilled to say that I just ordered a brand new Amiga x5000! (and I can’t wait to install Freepascal and start coding).

It’s also quite affordable. The x5000 model (which is the power system) retails at around €1650, which is roughly half the price I paid for my Intel i7, Nvidia GeForce GTX 970 workstation. And the potential as a developer is enormous.

Just think about the onslaught of Delphi code I can port over, and how instrumental my software can become by getting in early. Say what you will about Freepascal, but it tends to be the second compiler to hit a platform, right after GCC. And with Freepascal in place, a Delphi developer can do some serious magic!
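
To give a taste of why this matters: ordinary Object Pascal like the sketch below compiles unchanged for Windows, Linux or Amiga OS 4, assuming nothing beyond the standard Freepascal RTL – and that is what makes porting Delphi code realistic:

    program hello;

    {$mode delphi}  // accept Delphi syntax as-is

    uses
      SysUtils, Classes;

    var
      list: TStringList;
    begin
      list := TStringList.Create;
      try
        list.Add('Ported from Delphi with zero changes');
        // the target OS is baked in as a string at compile time
        list.Add('Compiled for ' + {$I %FPCTARGETOS%});
        WriteLn(list.Text);
      finally
        list.Free;
      end;
    end.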

Right. So the first Amiga is the power model, the Amiga x5000. This can be ordered today. It costs about the same as a good PC (in the 1600€ range depending on import tax and VAT). This is far less than I paid for my crap iMac (which I never use anymore).

The power model is best suited for people who do professional work on the machine. Software development doesn’t necessarily need all the firepower the x5000 brings, but more demanding tasks like 3D rendering or media composition will.

The A1222 “Tabor”

The next model is the A1222, which is due out around Christmas 2017 or the first quarter of 2018. You would perhaps expect a mid-range model, something retailing at around €800 or thereabouts – but the A1222 is without a doubt a low-end model.

It should retail for roughly €450. I think this is a great idea, because AEON (who make the hardware) have different needs from Hyperion (who make the new Amiga OS [more about that further into the article]). AEON needs to get enough units out to secure the foundation, while Hyperion needs vertical market penetration (read: becoming popular and hitting other hardware platforms as well). These factors are mutually exclusive, just like they are for Windows and OS X. Which is probably why Apple refuses to sell OS X without a Mac – they could end up competing with themselves.

A brave new Amiga OS

But there is more to this “revival” than just hardware. Many would even say that hardware is the least interesting part of the next-generation systems, and that the true value at this point in time is the new and sexy operating system. Because what the world needs now, more than hardware (in my opinion), is a lightweight alternative to Linux and Windows. A lean, powerful, easy to use, highly customizable operating system that will happily boot on a $35 Raspberry PI 3b or a $2500 Intel i7 monster. Something that makes computing fun, affordable and, most of all, portable!

My setup of Amiga OS 4, with FPC and Storm C/C++

And by lean I mean that the original Amiga operating system, the classic 3.x system that was developed all the way to the end, was initially created to thrive in as little as 512 KB. At most I had 2 megabytes of RAM in my Amiga 1200, and that was ample space to write and run large programs, play the latest games and enjoy the rich, colorful and user-friendly desktop environment. We have to remember that the Amiga had a multitasking, window-based OS a decade before Microsoft.

Naturally the next-generation systems are built to deal with the realities of 2017 and beyond, but incredibly enough the OS will run just fine with as little as 256 megabytes. Not even Windows Embedded can boot on that. Linux comes close with distributions like Puppy and DSL, but Amiga OS 4 gives you a lot more functionality out of the box.

Which way to go?

OK, so we have new hardware, but what about the software? Are the new Amigas supposed to run some ancient version of Amiga OS? Of course not! The people behind the new hardware have teamed up with a second company, Hyperion, that has – believe it or not – done a full re-implementation of Amiga OS! And naturally they have taken the opportunity to get rid of annoying behavior and to add behavior people expect in 2017 (like double-clicking on a window header to maximize it, easy access to menus and much more). Visually, Amiga OS 4 is absolutely gorgeous. Just stunning to look at.

Now there are many different theories and ideas about where a new Amiga should go. Sadly it’s not as simple as “hey, let’s make a new Amiga”; the old system is mired in patent and licensing issues. It is close to an investor’s worst nightmare, since ownership is so fragmented. Back when Commodore died, different parts of the Amiga were sold to different companies and individuals. The main reason we haven’t seen a new Amiga until now is that the owners have been fighting among themselves. The Amiga as we know it has been caught in limbo for close to two decades.

My stance on the whole subject is that Trevor Dickenson, the man behind the next-generation Amiga systems, has done the only reasonable thing a sane human being can do when faced with a proverbial patent kebab: the old machines are magical for those of us who grew up on them, but by today’s standards they are obsolete dinosaurs. The same can be said of Amiga OS 3.9. So Trevor has gone for a full re-implementation of both the OS and the hardware.

The other predominant idea is more GNU/Linux in spirit, where people want Amiga OS to be platform-independent (or at least written in a way that makes the code run on different hardware as long as some fundamental infrastructure exists). This actually resulted in a whole new OS being written, namely Aros, a community-made Amiga OS clone. A project that has been perpetually maintained for 20 years now.


Aros, a community re-implementation of Amiga OS for x86

While I think the guys behind Aros should be applauded, I do feel that AEON and Hyperion have produced something better. There are still kinks to work out in both systems – and don't get me wrong: I am thrilled that Aros is available, I just enjoy OS 4 more than I do Aros. Which is my subjective opinion, of course.

New markets

Right. With all this in mind, let us completely disregard the old Amiga and the Commodore drama, and instead focus on the new operating system as a product. It doesn't take long before a few thrilling opportunities present themselves.

The first that comes to my mind is how well suited OS 4 would be as an embedded platform. The problem with Linux is ultimately the same one that haunts OS X and Windows, namely that size and complexity grow proportionally over time. I have seen Linux systems as small as 20 megabytes, but for running X-based full-screen applications that take advantage of hardware-accelerated graphics, you really need a bigger infrastructure. And the moment you start adding those packages, Linux puts on weight and dependencies fast!


The embedded market is one place where Amiga OS would do wonders

With embedded systems I'm not just talking about headless servers or single-application devices. Take something simple like a ticket booth, an information kiosk or a POS terminal. Most of these run either Windows embedded or some variation of Linux. Since both of these systems require a fair bit of infrastructure to function properly, the price of the hardware typically starts at around 300€. Delphi and C++ based solutions, at least those that I have seen, end up using boards in the 300€ to 400€ range.

This price-tag is high considering the tasks a POS terminal or ticket system actually needs to do. You usually have a touch-enabled screen, a network connection, and a local database that will cache information should the network be down – the rest is visual code for dealing with menus, options, identification and fault tolerance. If a Visa terminal is included, then a USB driver must also be factored in.

These tasks are not heavy in themselves. So in theory a smaller system, properly adapted for the job, could do the same work if not better – at a much better price.

More for less, the Amiga legacy

Amiga OS would be able to deliver the exact same experience as Windows and Linux – but running on more cost-effective hardware. Where modern Windows and Linux typically need at least 2 gigabytes of RAM for a heavy-duty visual application, a full network stack and database services, Amiga OS is happy to run in as little as 512 megabytes. Everything is relative of course, but running a heavy visual application with less than a gigabyte of memory in 2017 is rare to say the least.

Already we have cut costs. Powerful ARM boards ship with 4 gigabytes of RAM and a snappy ARM v9 CPU – and medium boards ship with 1 or 2 gigabytes of RAM and a less powerful CPU. The price difference is already a good 75€ on RAM alone. And if the CPU is a step down, from ARM v9 to ARM v8, we can push it down by a good 120€. At least if you are ordering in bulk (say 100 units).

The exciting part is ultimately how well Amiga OS 4 scales. I have yet to try this since I don't have access to the machine I have ordered yet – and sadly Amiga OS 4.1 is compiled purely for PPC. This might sound odd since everyone is moving to ARM, but there are still plenty of embedded systems based on PPC. Still, I would urge our good friend Trevor Dickenson to establish a migration plan to ARM, because it would kill two birds with one stone: upgrading the faithful Amiga community while entering the embedded market at the same time. Since the same hardware would be involved, these two factors would stimulate the growth and adoption of the OS.


The PPC platform gives you a lot of bang-for-the-buck in the A1222 model

But for the sake of argument, let's say that Amiga OS 4 scales exceptionally well, meaning that it will happily run on ARM v8 with 1 gigabyte of RAM. This would mean it would run on systems like the Asus Tinkerboard, which retails at 60€ incl. VAT. This would naturally not be a high-performance system like the A5000, but embedded is not about that – it's about finding something that can run your application safely, efficiently and without problems.

So if the OS scales gracefully on ARM, we have brought the hardware cost down from 300€ to 60€ (I would round that up to 100€, something always comes up). If the customer's software was Windows-based, a further 50€ can be subtracted from the software budget for bulk licensing. Again, buying in bulk is the key.
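Summed up as back-of-the-envelope math (rounded, and assuming the bulk prices mentioned above):

  • Windows embedded on x86: roughly a 300€ board plus around 50€ in licensing, which gives the 350€ baseline I return to below
  • Amiga OS on an ARM board like the Tinkerboard: 60€ for the board, rounded up to 100€ since something always comes up
  • Potential saving: in the region of 250€ per unit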

Think different means different

Already I can hear my friends who are into Linux yell that this is rubbish and that Linux can be scaled down from 8 gigabytes to 20 megabytes if needed. And yes, that is true. But what my learned friends forget is that Linux is a PITA to work with if you haven't spent a considerable amount of time learning it. It's not a system you can just jump into and expect results from the next day. Amiga OS has a much friendlier architecture, and things that are often hard to do on Windows and Linux are usually very simple to achieve on the Amiga.

Another fact my friends tend to forget is that the great majority of commercial embedded projects are done using commercial software. Microsoft actually presented a paper on this when they released their IoT support package for the Raspberry PI. And based on personal experience I have to agree: in the past 20 years I have only seen two companies that use Linux as their primary OS both in products and in their offices. Everyone else uses Windows embedded in their products and for day-to-day management.

So what you get are developers using traditional Windows development tools like Visual Studio or Delphi (although that is changing rapidly with node.js). And they might be outstanding programmers, but Linux is still reserved for server administrators and the odd few who use it on a hobby basis. We simply don't have time to dig into esoteric “man pages” or explore the intricate secrets of the kernel.

The end result is that companies go with what they know. They get Windows embedded and use an expensive x86 board. So where they could have paid 100€ for a smaller SBC and used Amiga OS to deliver the exact same product — they are stuck with a 350€ baseline.

Be the change

The point of this little post has been to demonstrate that yes, the embedded market is more than open for alternatives. Linux is excellent for those who have the time to learn its many odd peculiarities, but over the past 20 years it has grown into a resource-hungry beast. Which is ironic, because it used to be Windows that was the bloated scapegoat. And to be honest, Windows embedded is a joy to work with and much easier to shape to your exact needs – but the prices are ridiculous, and it won't perform well unless you throw at least 2 gigabytes of RAM at it (relative to the task of course, but in broad strokes that's the ticket).

But wouldn't it be nice to have a clean, resource-friendly and extremely fast alternative? One where auto-starting applications in exclusive mode is a one-liner in the startup-sequence file? A file which is actually called “startup-sequence”, rather than some esoteric “init.d” alias that is neither a folder nor an archive but something reminiscent of the Windows registry? A system where the libraries and the whole folder structure that makes up drivers, shell, desktop and services are intuitively named for what they are?


Amiga OS could piggyback on the wave of low-cost ARM SBC’s that are flooding the market

You could learn how to use Amiga OS in two days tops, yet it holds enough depth that you can grow with the system as your needs become more complex. The general “how to” really can be picked up in a couple of days. The architecture is so well organized that even if you know nothing about settings, a folder named “prefs” doesn't leave much room for misinterpretation.

But the best thing about AmigaOS is by far how elegantly it has been architected. You know, when software is planned right, it tends to factor out things that would otherwise become obstacles. It's like a well-oiled machine where each part makes perfect sense and you don't need a huge book to understand it.

From where I am standing, Amiga OS is ultimately the biggest asset Hyperion and AEON have to offer. I love the new hardware that is coming out – but there is no doubt in my mind, and I know I am right about this, that the market these companies should focus on now is not PPC, but rather ARM and embedded systems.

It would take an effort to port the code from the PPC architecture to ARM, but having said that – PPC and ARM have much more in common than, say, PPC and x86.

I also think the time is ripe for a solid, powerful ARM board for desktop computers. While the smaller boards get most of the attention – the Raspberry PI, the ODroid XU4 and the (S)Tinkerboard – once you move the baseline beyond 300€ you see some serious muscle. Boards like the iMX6 OpenRex SBC Ultra pack a serious punch and, as expected, ship with 4 gigabytes of RAM out of the box.

While it's impossible to do a raw comparison between the A1222 and the iMX6 OpenRex, I would be surprised if the iMX6 delivered poor performance compared to the A1222 chipset. I am also sure that if we pushed the price up to 700€ and aimed at home computing rather than embedded, the powerful ARM boards involved would wipe the floor with PPC. There are a ton of factors at play here – a fast CPU doesn't necessarily mean better graphics. A good GPU should make up at least 1/5 of the price.

Another cool factor regarding ARM is that the BIOS gives you a great many features you can incorporate into your product. All the ARM boards I have give you FAT32 support out of the box, for instance; this is supported by the SoC itself, so you don't need to write filesystem drivers for it. Most boards also support the Ext2 and Ext3 filesystems, recognized automatically on boot. The rich BIOS/mini-kernel is what makes ARM so attractive to code for, because it takes away a lot of the boring, low-level tasks that took months to get right in the past.

Final words

This has been a long article, stretching from the early years of Commodore all the way up to the present day and beyond. I hope some of my ideas make sense – and I also hope that those who are involved in the making of the new Amiga pick up an idea or two from this material.

Either way I will support the Amiga with everything I've got – but we need a couple of smart ideas and concrete plans from management. And in my view, Trevor is doing exactly what is needed.

While we can debate the choice of PPC, that is ultimately a story with a long, long background to it. But thankfully nothing is carved in stone, and the future of the Amiga 5000 and 1222 looks bright! I am literally counting the days until I get one!

Amibian.js on bitbucket

August 1, 2017 Leave a comment

The Smart Pascal driven desktop known as Amibian.js is available on Bitbucket. It was hosted in a normal GitHub repository earlier – so make sure you clone from this one.

About Amibian.js

Amibian is a desktop environment written in Smart Pascal. It compiles to JavaScript and can be used through any modern HTML5-compliant browser. The project consists of both a client and a server, both written in Smart Pascal. The server is executed by node.js (note: please install PM2 to have better control over scaling and task management: http://pm2.keymetrics.io/).

Amibian.js is best suited for embedded projects, such as kiosk systems. It has been used in tutoring software for schools, custom routers and a wide range of different targets. It can easily be molded into a rich environment for SAD (single application device) based software – but also made to act more like a real operating system:

  • Class-driven filesystem, easy to target external services
    • RAM device-type
    • Browser cache device-type
    • ZIP-file device-type
    • Node.js device-type
  • Cross-domain application hosting
    • Traditional IPC protocol between hosted application and desktop
    • Shared resources
      • CSS styling
      • Glyphs and images
    • Event-driven visual controls
  • Windowing manager makes it easy to implement custom applications
  • Support for the fullscreen API

Amibian ships with UAE.js (based on the SAE.js codebase), making it possible to run Amiga software directly on the desktop surface.

The bitbucket repository is located here: https://bitbucket.org/hexmonks/client

 

Drag and drop with smart pascal

July 28, 2017 Leave a comment

Drag and drop under HTML5 is incredibly simple; even simpler than Delphi's mechanisms. Having said that, it can be a PITA to work with due to the async nature of the JavaScript API.

This functionality is just begging to be isolated in a non-visual controller (read: component), and it’s on my list of RTL features. But it will have to wait until we have wiped the list clean.


Drag and drop is useful for many web applications

Anyways, people asked me about a simple way to capture a drag & drop event and kidnap the file-data without any form tags involved. So here is a very simple ad-hoc example.

The FView variable is a reference to a visible control. In this case the form itself, so that you can drop files anywhere.

FView.Handle.ondragover := procedure (event: variant)
begin
  // In order to hijack drag & drop, this event must prevent
  // the default behavior. So we hotwire it
  event.preventDefault();
end;

FView.Handle.ondrop := procedure (event: variant)
begin
  event.preventDefault();

  var ev := event.dataTransfer;
  if (ev) then
  begin
    if (ev.items) then
    begin
      for var x:=0 to ev.items.length-1 do
      begin
        var LItem := ev.items[x];
        if (LItem) then
        begin
          if string(LItem.kind).ToLower() = "file" then
          begin
            // extract the File object from the data-transfer item
            var LFile := LItem.getAsFile();

            var reader: variant;
            asm
              @reader = new FileReader();
            end;
            reader.onload := procedure (data: variant)
            begin
              if reader.readyState = 2 then
              begin
                writeln("File data ready:");

                var binbuffer := reader.result;
                var raw: TDefaultBufferType;
                asm
                  @raw = new Uint8Array(@binbuffer);
                end;

                var Buffer := TBinaryData.Create();
                Buffer.AppendBuffer(raw);

                writeln(Buffer.ToBase64());

              end;
            end;
            reader.readAsArrayBuffer(LFile);

          end;
        end;
      end;
    end;
  end;
end;

Object models, a Smart Pascal example

July 22, 2017 Leave a comment

Most information developers work with is hierarchical, organized in a classical parent-child manner. Parent-child relationships are so universal that they are everywhere, from how visual controls are organized to how the elements in a document are stored. No matter if it's a visual treeview in Delphi, an HTML element in a document or entries in a PDF file; it's pretty much all organized as a series of inter-linked, parent-child relationships.

Since parent-child trees are so universal, developers usually end up with a unit that contains a couple of simple base classes. Typically these classes are used to create a model of things. The tree can contain the actual data itself – or, more commonly, it's used to represent a model that is later processed or realized visually.


Most frameworks are essentially parent-child hierarchies

Here is an example of such a unit. It compiles under Smart Pascal and should work fine for most versions. It has virtually no dependencies.

Populate the data property with whatever data you want to represent, and then you can enumerate and work with the model. What you use it for is ultimately up to you. I use it a lot when dealing with menus; it makes it easier to define menus and sub-menus and then simply feed the model to a construction routine (see the usage sketch after the unit).

unit DataNodes;

interface

uses
  System.Types;

type

  // for older SMS versions that lack "system.types", un-remark this:
  // TEnumResult = (erContinue = $A0, erBreak = $10);

  TCustomDataNode = class;
  TCustomDataNodeList = array of TCustomDataNode;

  TDataNodeEnumEnterProc = function (const Root: TCustomDataNode): TEnumResult;
  TDataNodeEnumExitProc = function (const Root: TCustomDataNode): TEnumResult;
  TDataNodeEnumProc = function  (const Child: TCustomDataNode): TEnumResult;
  TDataNodeCompareProc = function (const Value: variant): boolean;

  TCustomDataNode = class
  protected
    property  Parent: TCustomDataNode;
    property  Caption: string;
  public
    property  Data: variant;
    property  Children: TCustomDataNodeList;

    procedure Clear;

    function  Search(const Compare: TDataNodeCompareProc): TCustomDataNode;

    procedure ForEach(const Before: TDataNodeEnumEnterProc;
              const Process: TDataNodeEnumProc;
              const After: TDataNodeEnumExitProc); overload;

    procedure ForEach(const Process: TDataNodeEnumProc); overload;

    function  Serialize: string;
    class function  Parse(const JSonData: string): TCustomDataNode;

    constructor Create(const NodeOwner: TCustomDataNode;
                NodeText: string; const NodeData: variant); overload;

    constructor Create(const NodeOwner: TCustomDataNode;
                const NodeData: variant); overload;
  end;

  TDataNode = class(TCustomDataNode)
  public
    property  Parent;
    property  Caption;
  end;

  TDataNodeTree = class(TCustomDataNode)
  public
    property  Caption;
  end;

implementation

//#############################################################################
// TCustomDataNode
//#############################################################################

constructor TCustomDataNode.Create(const NodeOwner: TCustomDataNode;
            NodeText: string; const NodeData: variant);
begin
  Parent := NodeOwner;
  Caption := NodeText;
  Data := NodeData;
end;

constructor TCustomDataNode.Create(const NodeOwner: TCustomDataNode;
         const NodeData: variant);
begin
  Parent := NodeOwner;
  Data := NodeData;
end;

function TCustomDataNode.Serialize: string;
begin
  asm
    @result = JSON.stringify(@self);
  end;
end;

class function TCustomDataNode.Parse(const JSonData: string): TCustomDataNode;
begin
  asm
    @result = JSON.parse(@JSONData);
  end;
end;

procedure TCustomDataNode.Clear;
begin
  try
    for var x := 0 to Children.length-1 do
    begin
      if Children[x] <> nil then
        Children[x].free;
    end;
  finally
    Children.Clear();
  end;
end;

function TCustomDataNode.Search(const Compare: TDataNodeCompareProc): TCustomDataNode;
var
  LResult: TCustomDataNode;
begin
  ForEach( function (const Child: TCustomDataNode): TEnumResult
    begin
      // keep enumerating until the comparison yields a match
      result := TEnumResult.erContinue;
      if Compare(Child.Data) then
      begin
        LResult := Child;
        result := TEnumResult.erBreak;
      end;
    end);
  result := LResult;
end;

procedure TCustomDataNode.ForEach(const Before: TDataNodeEnumEnterProc;
          const Process: TDataNodeEnumProc;
          const After: TDataNodeEnumExitProc);
begin
  if assigned(Before) then
    Before(self);
  try
    if assigned(Process) then
    begin
      for var LChild in Children do
      begin
        if Process(LChild) = erBreak then
          break;
      end;
    end;
  finally
    if assigned(After) then
      After(self);
  end;
end;

procedure TCustomDataNode.ForEach(const Process: TDataNodeEnumProc);
begin
  if assigned(Process) then
  begin
    for var LChild in Children do
    begin
      if Process(LChild) = erBreak then
        break;
    end;
  end;
end;

end.
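And here is a minimal, hypothetical usage sketch for the unit above, using the menu scenario mentioned earlier. The captions and data values are made up for illustration; note that nodes are appended to the parent's Children array explicitly:

procedure BuildAndQueryMenu;
begin
  // Build a small model: a root with a "File" menu holding two entries
  var Root := TDataNodeTree.Create(nil, "MainMenu", 0);
  var FileMenu := TCustomDataNode.Create(Root, "File", 1);
  Root.Children.Add(FileMenu);
  FileMenu.Children.Add(TCustomDataNode.Create(FileMenu, "Open", 2));
  FileMenu.Children.Add(TCustomDataNode.Create(FileMenu, "Save", 3));

  // Enumerate the direct children of the root
  Root.ForEach(function (const Child: TCustomDataNode): TEnumResult
    begin
      writeln(Child.Data);
      result := TEnumResult.erContinue;
    end);

  // Locate a node by its data value (Search checks direct children)
  var Found := Root.Search(function (const Value: variant): boolean
    begin
      result := (Value = 1);
    end);
  if Found <> nil then
    writeln("Found the File menu node");
end;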

LDef parser done

July 21, 2017 Leave a comment

Note: For a quick introduction to LDef click here: Introduction to LDef.

Great news guys! I finally finished the parser and model builder for LDef!

That means we just need to get the assembler ported. This is presently running fine under Smart Pascal (I like to prototype things there since it's faster) – and it will be easy to port it over to Delphi and Freepascal once the model has gone through the steps.

I'm really excited about this project, and while I sadly don't have much free time – this is a project I truly enjoy working on. Perhaps not as much as Smart Pascal, which is my baby, but still; it's turning into a fantastic system.

Thoughts on the architecture

One of the things I added support for, and that I have hoped Embarcadero would add to Delphi for a number of years now, is support for contract coding. This is a huge topic that I'm not jumping into here, but one of the features it requires is support for entry and exit sections. Essentially, you can define code that executes before the method body and directly after it has finished (before the result is returned, if it's a function).

This opens up some very clever means of preventing errors, or at the very least giving the user better information about what went wrong. Automated tests also benefit greatly from this.

For example, a normal object pascal method looks like this:

procedure TForm1.MySpecialMethod;
begin
  writeln("You called my-special-method")
end;

The basis of contract design builds on the classical form and expands it like this:

procedure TForm1.MySpecialMethod;
  Before()
  begin
    writeln("Before my-special-method");
  end;

  After()
  begin
    writeln("After my-special-method");
  end;

begin
  writeln("You called my-special-method")
end;

Note: contract design is a huge system and this is just a fragment of the full infrastructure.

What is cool about the before/after snippets is that they allow you to verify parameters before the body is even executed; likewise, you get to work on the result before the value is returned (if any).
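For instance, using the proposed syntax, a hypothetical setter could verify its parameter like this (the method and its field are made up purely for illustration):

procedure TForm1.SetRatio(const Value: double);
  Before()
  begin
    // reject bad input before the method body ever runs
    if (Value < 0.0) or (Value > 1.0) then
      raise Exception.Create("SetRatio expects a value between 0 and 1");
  end;
begin
  FRatio := Value;
end;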

You might ask: why not just write the tests directly, like people do all the time? Sure, you could. But there will also be methods that you have no control over, like a wrapper method that calls a system library for instance. Being able to attach before/after code to externally defined procedures helps take the edge off error testing.

Secondly, if you are writing a remoting framework where variant data and multi-threaded invocation is involved – being able to check things as they are dispatched means catching potential errors faster – leading to better performance.

As always, coding techniques are a source of argument – so I'm not going into that now. I have added support for it, and if people don't need it then fine, just leave it be.

Under LDef assembly it looks like this:

public void main() {
  enter {
  }

  leave {
  }
}

Well I guess that’s all for now. Hopefully my next LDef post will be about the assembler being ready – leaving just the linker. I need to experiment a bit with the codegen and linker before the unit format is complete.

The bytecode format needs to include enough information for the linker to glue things together. So every class, member, field etc. must be emitted in a way that allows the linker to quickly look things up. It also needs to write the actual, resulting method offsets into the bytecode.

Have a happy weekend!

FMX 4 linux gets an update

July 20, 2017 Leave a comment

The Firemonkey framework that allows you to compile for the Linux desktop (Linux x86 server is already supported) just got a nice update. Among the changes are a nice radial gradient pattern – and several squashed bugs.


This is an awesome addition if you already have Delphi 10.2 – and if writing Ubuntu desktop applications is something you want, then this is the package to get!

Check it out: http://fmxlinux.com/

Smart Pascal: Amibian vs. FriendOS

July 20, 2017 Leave a comment

This is not a new question, and despite my earlier post I still get hammered with these on a weekly basis – so let's dig into the subject and clear it up.

I fully understand that for non-developers, suddenly having two Amiga-like web desktops can be a bit confusing; especially since, superficially at least, they do many of the same things. But there is actually a lot of coincidence surrounding all this, as well as evolution of the general topic. People who work in a field will naturally come up with the same ideas from time to time.

But OK, let's dig in and clear away any confusion.

You know about FriendOS right? It looks a lot like Amibian


Custom native web servers have been a part of Delphi for ages, so it's not that exciting for us

“A lot” is probably stretching it. But OK: FriendOS is a custom server system with a sexy desktop front-end written in HTML5. So you have a server that is custom-written to interact with the browser in a special way. This might sound like a revolution to non-developers, but it's actually established technology; it's been a part of Delphi and C++ Builder for at least 12 years now (Intraweb being the best example, Raudus another). So if you are wondering why I'm not dazzled, it's because this has been around for a while.

The whole point of Amibian.js is to demonstrate a different path; to get away from the native back-end and to make the whole system portable and platform independent. So in that regard the systems are diametrically different.


Custom web servers that talk to your web app are old news. Delphi developers have done this for a decade at least, and it's not really interesting at this point. Node.js holds much greater promise.

What FriendOS has done that is unique, and that I think is super cool – is to couple their server with RDP (remote desktop protocol) and some nice video streaming for smooth video chat. Again, these are off-the-shelf parts that anyone can add once you have a native back-end; it's not really hard to code, but it is time-consuming, especially when you are potentially dealing with a large number of users spawning threads all over the place. I think Friend-Labs have done an exceptionally good job here.

When you combine these features it creates the impression of an operating-system-like environment. And this is perfectly fine for ordinary users. It all depends on your needs and what exactly you use the computer for.

And just to set the war-mongers straight: FriendOS is not going up against Amibian. It's going up against ChromeOS, Nayu and a ton of similar systems; all of them with deep pockets and an established software portfolio. We focus on software development. It's not even in the same ballpark.

To be perfectly frank: I see no real purpose for a web desktop except when connected to a context. There has to be an advantage beyond isolating web functions in one place. You need something special that your system does better, or differently, than others. Amibian has been about UAE.js and running retro games in a familiar environment – creating a base that Amiga lovers can build on and play with. Again, it's based on our prefab for customers who make embedded systems and use our compiler and RTL for that.

If you have a hardware product like a NAS, a ticket system or a retro-game machine and want a nice web front-end for it: then it makes sense. But there is absolutely nothing in either of our systems that you can't whip up using Intraweb or Raudus in a few weeks. If you have the luxury of a native back-end, then adding Active Directory support is a matter of dropping a component. You can even share printers and USB devices over the wire if you like; this has been available to Delphi and C++ developers for ages. The “new” factor here, which FriendOS does very well I might add, is connectivity.

This might sound like criticism but it's really not. It's honesty and facts. They are going to need some serious cash to take on Google, Samsung, LG and various other players that have been doing similar things for a long time (or are about to jump on the same concepts) — Amibian.js is for Amiga fans and people who use Smart Pascal to write embedded applications. We don't see anything to compete with, because Amibian is a prefab connected to a programming language. FriendOS is a unification system.

A programming language doesn't have the aspirations of a communications company. So the whole “oh, who is best” or “are you the same” debate is just wrong.

Ok you say it’s not competing, but why not?

To understand Amibian.js you first need to understand Smart Pascal (see the Wikipedia article on Smart Pascal). Smart Pascal (smartmobilestudio.com) is a software development studio for writing software using web technology rather than native machine code. It allows you to create whatever you like, from games to servers, or kiosk software to the next Facebook clone.

Our focus is on enabling our customers to quickly program robust mobile applications, servers, kiosk software, games and large JavaScript projects; products that would otherwise be hard to manage if all you have is vanilla JavaScript. I mean, why spend two years coding something when you can do it in two months using Smart? So a web desktop is almost a trivial thing once you understand how large our codebase is and the scope of the product.


Under Smart Pascal, what people know as Amibian.js is just a project type. There is no competition between FriendOS and Amibian because a web desktop represents a ridiculously small piece of our examples; it's literally mistaking the car for the factory. Amibian is not our product; it is a small demo and prefab (a pre-fabricated system that others can download and build on) that people use to save time. So under Smart, creating your own web desktop is a piece of cake; it's a click, and then you can brand it, expand it and do whatever you like with it. Just like any project you create in Visual Studio, Delphi or C++ Builder.

So we are not in competition with FriendOS, because we create and deliver development tools. Our customers use Smart Pascal to create web environments both large and small, and naturally we deliver what they need. You could easily create a FriendOS clone in Smart if you have the skill, but again – that is but a tiny particle in our codebase.

Really? Amibian.js is just a project under Smart Pascal?

Indeed. Our product delivers a full object-oriented pascal compiler, debugger and IDE. So you can write classes, use inheritance and enjoy all the perks of a high-level language — and then compile this to JavaScript.

You can target node.js, the browser and about 90+ embedded devices out of the box. The whole point of Smart Pascal is to avoid the PITA that is writing large applications in JavaScript. We do this by giving you a classical programming language that was made especially for application authoring, and then compiling that to JavaScript instead.


Amibian.js is just a tiny, tiny part of what Smart Pascal is all about

This is a massive undertaking that started back in 2009/2010 and involves a high-quality compiler, linker, debugger and code generator; a full IDE with a ton of capabilities; and last but not least: a huge run-time library that allows you to work with the DOM (document object model, or HTML) and node.js from the vantage point of a programmer.

Most people approach web development as designers. They write HTML and then style it using a stylesheet. They work with colors, aspects and pages. Which means people who traditionally write programs fall between two chairs: first they must learn about HTML and CSS, and secondly a language which is ill-equipped for large-scale applications (imagine writing Adobe Photoshop in nothing but JS. Sure it's possible, but wouldn't you rather spend a month coding it than a year? In a language that actually makes sense?).

With Smart you approach web development like you do writing programs. You work with visual controls, change properties, write code in response to events. Even writing your own visual controls, which you can re-use and inherit from later, is both fun and easy. So rather than ending up with a huge mass of spaghetti code – which sadly is the fate of most large-scale JavaScript projects – Smart lets you work like you are used to. In a language better suited for the task.

And yes, I was not kidding when I said this was a huge undertaking. The source code in our codebase is close to 2.5 gigabytes. Keep in mind that this is source code and libraries. It's not something you slap together over a weekend.


The Smart source-code is close to 2.5 gigabytes. It has taken years to complete

But why do Amibian and FriendOS both focus on the Amiga?

That is pure coincidence. The guys over at Friend Labs started out on the Amiga just like we did. So when I updated our desktop project (previously called Quartex Media Desktop), the Amiga look and feel came naturally to me.

I'm a huge retro-computing fan and I love the Amiga. When I sat down to rewrite our window manager I loved the way Amiga OS 4.x looked, so I decided to implement a UI inspired by that.

People have to remember that the Amiga was a huge success in Scandinavia, so finding developers in their late 30s or early 40s who didn't own an Amiga is harder than you think.

So the fact that we all root our ideas back to the Amiga is both coincidence and a mutual passion for a great platform. One that really should have survived the financial onslaught of fat CEOs and their minions on the board.

But Amibian does a lot of what FriendOS does?

Probably. JavaScript is multi-tasking by default, so if loading external URLs into window containers, doing live resize and other things is what you refer to, then yes. But that is the nature of web programming. It's like creating a bucket if you want to carry water; it is a natural first step in an evolutionary pattern. It's not like FriendOS is copying us, I would imagine.


For the record, Smart started back in 2010 and the media desktop came with the first hotfix, so it had been available for years before Friend-Labs even existed. Creating a desktop has not been a huge part of what we do, because mobile applications, building a rich and solid run-time library with hundreds of classes for our customers – and making an IDE that is great to use – that is our primary job.

We didn’t even know FriendOS existed. Let alone that it was a Norwegian product.

But you posted that you worked for FriendOS earlier?

Yes I did, very briefly. I was offered a position and I worked there for a month. It was a chance to work side by side with legends like David John Pleasance, ex-head of Commodore for Europe; and also my childhood hero Francois Lionet, author of Amos Basic for the Amiga way back in the 80s and 90s.


We never forget our childhood heroes

Sadly, we had our wires crossed. I am an awesome object pascal developer, while the guys at Friend-Labs are awesome C developers. I work primarily on Windows while they work mostly on Linux. So in essence they hired a Delphi developer to work in a language he doesn't know, on a platform he hadn't used.

They simply took for granted that I worked in C/C++, while I took for granted that they used object pascal. It's an easy mistake to make and it's not the first time; probably not the last either.

Needless to say, the learning curve would be extremely steep for any developer (learning a new operating system and a new programming language at the same time as you are supposed to be productive).

When my girlfriend suddenly faced a life-threatening illness, the situation became worse. It was impossible for me to commute or leave her side for the foreseeable future. When you add the six-month learning curve to that situation – six months of not being able to contribute at the level I am used to – well, I am old enough to know how that ends. So I did what was best for everyone and resigned.

Besides, I am a damn good Delphi developer with standing invitations from many companies, so it made more sense to just take a step back. Which was not fun, because I really enjoyed the short time I was there. But it was not meant to be.

And that is basically all there is to it.

Ok. But if Smart is a development tool, will it support FriendOS?

This is something that I really want to do. But since The Smart Company is a proper company with stocks, shareholders and investors – it’s not a decision I can take on my own. It is something that must be debated by the board. But personally yeah, I would love that.


As they grow, so does the need for proper development tools

One of the reasons I hope FriendOS succeeds is that it's a win-win situation. The more they expand, the more relevant Smart becomes. Say what you will about JavaScript, but writing large and complex applications in it is not easy by any measure.

So the moment we introduce Smart Pascal for Friend, their users will be able to write large applications rapidly, with better time-to-market and consequent ROI. So it's a win-win. If they succeed then we get a bigger market; if they don't, we haven't lost anything.

This may sound extremely self-serving, but Friend-Labs have had the same chance as everyone else to invest in Smart; our investor plans have been available for quite some time, and we have to do what is best for our company.

But what about Amibian, was it just a short thing?

Not at all. It is on hold for a few months while we release the next-generation RTL, which is probably the biggest update in the history of Smart Pascal. We have a very clear agenda ahead of us, and Amibian.js is (as underlined) a very small part of what we do.

But Amibian is written using our next-generation RTL, and without that our customers can't really do much with it. So it's important to get the RTL out first and then work on the IDE to reflect its many new features. After that, Amibian.js development will continue.

The primary target for Amibian.js is embedded devices and kiosk systems, coupled with full-screen web applications and hardware front-ends (NAS and backup devices being great examples). So the desktop will run on affordable, off-the-shelf hardware starting at $40, all the way up to the most powerful and expensive x86 boards on the market. Cheap solutions like the Raspberry PI, ODroid XU4 and Tinkerboard will deliver what you today need a dedicated $120 x86 board to achieve.


Our desktop will run on many targets and is platform independent by design

This means that our desktop has a wildly different modus operandi. We will not require a constant connection to a remote server. Amibian will happily boot up on a single device, regardless of processor type.

Had we coded our back-end using Delphi or C++ Builder (native, like FriendOS has done) we would have been finished months ago. And I could have caught up with FriendOS in a couple of months if I wanted to. But that is not on our agenda. We have written our server framework for node.js as we coded the desktop – which means it's platform and OS agnostic by design. If node.js runs, Amibian will run. It won't care if you are running on a $40 embedded board or the latest Intel i9 CPU.

Last words

I really hope this has helped, and that the confusion between Amibian.js and our agenda, versus what Friend-Labs is doing, is now cleared up.


From Norway with love

I wish Friend-Labs the very best and hope they are successful in their endeavour. They have worked very hard on the product and deserve it. And while I might come across as arrogant at times, I'm really not.

Web desktops have been around for a long time now (Asustor's is my favorite), through Delphi and C++ Builder, and those are just facts. But that doesn't mean you can't put things together in new and interesting ways! Smart itself was first put together from existing technology. It was said to be impossible by many, because JavaScript and object pascal are unthinkable companions. But it turned out to be a perfect match.

As for the future – personally I don't believe in the web desktop outside a specific context; something to give it purpose, if you like. I believe, for instance, that Amibian.js will be awesome for Amiga users when it's running on a $99 ARM laptop. Where the system boots straight into a full-screen desktop, and where UAE.js is fully integrated into the core, making retro-gaming and running old programs close to seamless. That I can believe in.

But it would make no sense running Amibian or FriendOS in a browser on top of a Windows desktop or a full Ubuntu X session. Unless the virtual desktop functions as your corporate window, with access to company mail, documents and essentially what every web-based intranet already does. So once again we end up with the fact that this has already been done. And unless you create a unique context for it, it just won't have any appeal. This is also why I haven't pursued the same tech Friend-Labs have; that's not where the exciting stuff is happening.

But I will happily be proven wrong, because that would mean an even bigger market for us, should we decide to support the platform.

LDef and bytecodes

July 14, 2017 Leave a comment

LDef, short for Language Definition format, is a standard I have been formulating for a couple of years. I have taken my experience with writing various compilers and parsers, and also my experience of writing RTLs, and combined it all into a standard.

LDef is a way for anyone to create their own programming language. Just like popular libraries and packages deal with the low-level stuff – Gr32, for example, is an excellent graphics library – LDef deals with the hard stuff and leaves you with the pleasant job of defining what the language should look like.

The idea is to make a language construction kit, if you like, where the underlying engine is flexible enough to express the languages we know and love today – and also powerful enough to express new ideas. For example: let's say you want to create an awesome new game system (just as an example; it applies to any system that can be automated). You have the means and skill to create the actual engine – but how are you going to market it? You will be up against monoliths like Unity and simple “click and play” engines like ClickTeam Fusion, Game Maker and the like.

Well, the only way to make good games is hard work. There are no two ways about it. You can fake your way only so far – so at the end of the day you want to give your users something solid.

In our example of publishing a game engine, I think you would stand a much better chance of attracting users if you hooked that engine up to a language. A language that is easy to use, easy to learn, and with commands that are both specific and non-specific to your engine.

There are some flavours of Basic that have produced knock-out games for decades, like BlitzBasic. That language alone has produced hundreds of titles for PC, Xbox and even Nintendo. So it's insanely fast and not a pushover.

And here is the cool part about LDef: it makes it easy for you to design your own languages. You can use one of the pre-defined languages, like object pascal or visual basic, if that is what you like – but ultimately the fun begins when you start to experiment with new ideas and language features. And it's fun when you get to that point, because all the nitty-gritty is handled. You get to focus on the superficial stuff like syntax and high-level functions. So you can shave off quite a bit of development time and make coding fun again!

The paradox of faster bytecodes

Bytecodes used to be too slow for anything substantial. On 16-bit machines bytecodes were used in maybe one language (that I know of), and that was the ‘E’ compiler. The E language was maybe 30 years ahead of its time and is probably the only language I can think of that fits cloud programming like hand in glove. But it was also an excellent system automation (scripting) language, and it really turned some heads back in the late 80s and early 90s. REXX was only recently added to OS X, some 28 years after the Amiga line of computers introduced it to the general public.


Bytecode dump of a program compiled with the node.js version of the compiler

In modern times bytecodes have resurfaced through Java and the .NET framework, which for some reason caused a stir in the whole development community. I honestly never bought into the hype, but I am old enough to remember the whole story – so I'm probably not the Microsoft demographic anyway. Java hyped their virtual machine opcodes to the point of exhaustion. People really are stupid. Man, did they do a number on CEOs and heads of R&D around the world.

Anyways, the end of the story was that Intel and AMD went along with it and did some optimizations that could help bytecodes run faster. The stack handling was optimized with Java in mind, because let's face it – the stack is the proverbial assault on the hardware. And the cache was expanded on command from the emper.. eh, Microsoft. Also (if I remember correctly) the “jump to pointer” and various branch instructions were made to execute faster. I remember reading about this in Dr. Dobb's Journal and Microsoft Developer Magazine; granted, it was a few years ago. What was interesting was the symbiotic relationship between Intel and Microsoft; I really didn't know just how closely knit these guys were.

Either way, bytecodes in 2017 are capable of a lot more than they ever were on 16-bit and early 32-bit systems. A CPU like the Intel i5 or i7 will chew through bytecodes like a hot knife through butter. It depends on how you orchestrate the opcodes and how much work you delegate to the various instructions.

Modeled instructions

Bytecodes are cool, but they have to be modeled right or it's all going to end up as a bloated, slow and limited system. You don't want to be too low-level, otherwise what is the point of bytecodes? Bytecodes should be part of a bigger picture, one that could some day be modeled using FPGAs, for instance.

The LDef format is very flexible. Each instruction is ultimately a single 32-bit longword (4 bytes), where each byte holds key information about the command, what data lies ahead in the cache and how it should be read.

The byte organization is:

  • 0 – Actual opcode
  • 1 – Instruction layout

Depending on the instruction layout, the next two bytes can hold different values. The instruction layout is a simple value that defines how the data for the instruction is passed (see the sketch after this list):

  • Constant to register
  • Variable to register
  • Register to register
  • Register to variable
  • Register to stack
  • Stack to register
  • Variable to variable
  • Constant to variable
  • Stack to variable
  • Program counter (PC) to register
  • Register to Program counter
  • ED (exception data) to register
  • Register to exception-data

As you can probably work out, the layout list above hints at some architectural features. Variables are first-class citizens in LDef; they are allocated, managed and released using instructions. Constants can either be handled automatically and referenced by id (a resource chunk is linked to the class binary) or compiled “in place” directly into the assembly as part of the instruction. For example:

load R[0], "this is a test"

This line of code will take the constant “this is a test” and move it into register #0. You can choose to have the text data stored as a proper resource which is appended to the compiled bytecode (all classes and modules have a resource chunk), or just compile it “as is” and have the data read directly. The first option is faster and something you can adjust with the compiler optimization options. The second option is easier to work with when you debug, since you can see the data directly in the debug memory dump.

And last but not least there are the registers – 32 of them in total (so the low-level coders out there should have few limitations with regards to register mapping). All operations (like divide, multiply etc.) operate on registers only. So to multiply two variables, they first have to be moved into registers and the multiplication executed there – then you can move the result back to a variable afterwards.


LDef assembly code. Simple but extremely effective

The reason registers are used in my runtime system is that you will not be able to model an FPGA with high-level concepts like “variables”, should someone ever try to implement this as hardware. Registers, however, are very easy to model, and they are how actual processors work. You move things from memory into a CPU register, perform an action, and then move the result back into memory.
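
To illustrate, multiplying two variables could look something like this in LDef assembly. Only the load syntax is documented above, so the mul and move mnemonics, as well as the variables A, B and C, are hypothetical placeholders for the "move in, operate, move out" flow just described:

load R[0], A
load R[1], B
mul R[0], R[1]
move C, R[0]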

This is where Java made a terrible mistake. They move all data onto the stack and then call the operation. This simplifies the execution of instructions, since there are never any registers to keep track of, but it murders stack space and renders Java useless on mobile devices. The reason Google threw out classical Java (e.g. Java as bytecodes) is due to this fact (and more). After the first Android devices came out they quickly switched to a native compiler – because Java was too slow, too power-hungry and required too much memory (especially stack space) to function properly. Battery life was close to useless, and the only way to save Java was to go native. Which is laughable, because the entire point of Java was mobility: “compile once, run everywhere” — yeah well, that didn't turn out too well, did it 😀

.NET improved on this by adding a “load resource” type of instruction, where each method loads its constant data by number into pre-defined slots (the variables you have declared). Then you can execute operations in typical “A + B to C” style (actually, all of that is omitted since the compiler already knows about A, B and C). This is much more stack-friendly, and it places the performance penalty on the common language runtime (CLR).

Sadly Microsoft's platform, like everything Microsoft does, requires a pretty large infrastructure. It's not simple, elegant and fast – it's monolithic, massive and resource-hungry. You don't see .NET being the first thing ported to a new platform. You typically see GCC, followed by Freepascal.

LDef takes the bytecode architecture one step further. On the assembly level you reference data using identifiers, just like .NET, and each instruction is naturally executed by the runtime engine – but data handling is kept within the virtual realm. You are expected to use the registers as temporary holding slots for your information. And no operations are ever done directly on a variable.

The benefit of this is:

  • Better payload balancing
  • Easier to JIT since the architecture is closer to real assembly
  • Retains important aspects of how real hardware works (with FPGA in mind)

So there are good reasons for the standard, all of them very good.

C-like intermediate language

With the assembler so clearly defined, you would expect assembly to be the way you work. In essence that is what you do, but since OOP is built into the system and there are structures you are expected to populate — structures that would be tedious to fill out in raw, unbridled assembler — I have opted for a C++ inspired intermediate language.


The LDEF assembler kitchen sink

You would half expect me to implement pascal, but truth be told pascal parsing is more complex than C parsing, and C allows you to recycle parsers more easily; dealing with sub-structures and nested regions is less maintenance and easier to write code for.

So there is no specific reason why I picked C++ as the intermediate language. I would prefer pascal, but I also think it would cause a lot of confusion, since object pascal will be the prime citizen of LDef languages. My other language, N++, also used curly brackets, so I'm honestly not strict about what syntax people prefer.

Intermediate language features supported are:

  • Class declarations
  • Struct declarations
  • Parameter to register mapping
  • Before method code (enter)
  • After method code (leave)
  • Alloc section for class fields
  • Alloc section for method variables

The before and after code for methods is very handy. It allows you to define code that should execute before and after the actual procedure body. On a higher level, when designing a new language, this is where you would implement custom allocation, parameter testing etc.

So if you call this method:

function testcode() {
  enter {
    writeln("this is called before the method entry");
  }
  leave {
    writeln("this is called after the method exits");
  }
  writeln("this is the method body");
}

Results in the following output:

this is called before the method entry
this is the method body
this is called after the method exits

 

When you work on designing your own language, these are the hooks you will eventually rely on.

Truly portable

Now, I have no aspirations of going into competition with Oracle, Microsoft or anyone in between. Like most geeks I do things I find interesting, and I enjoy working within a field of computing that is stimulating and personally rewarding.

Programming languages are an area where things haven't really changed that much since the golden 80s. Sure, we have gotten a ton of fancy new software, and the way people use languages has changed – but at the end of the day, the languages we use haven't really changed that much.

JavaScript is probably the only language that came out of the blue and took the world by storm, but that is due to the central role the browser holds for the internet. I sincerely doubt JavaScript would have made even a dent in the market otherwise.

LDef is the type of toolkit that can change all this. It's not just another language, and it's not just another bytecode engine. A lot of thought has gone into its architecture; not just notions of “how can we do this or that”, but big ideas about the future of computing and how IoT will sculpt the market within 5-8 years. And the changes will be permanent and irrevocable.

Being able to define new languages will be of utmost importance in the decade ahead. We don't even know the landscape yet, but we can extrapolate some ideas based on where technology is going. All of it in broad strokes of course, but still – there are some fundamental facts about computers that are timeless and haven't aged a day. It's like mathematics: the Pythagorean theorem may be 2500 years old, but it's just as valid today as it was back then. Principles never die.

I took the example of a game engine at the start of this article. That might have been a poor choice for some, but hopefully the general reader got the message: the nature of control requires articulation. Regardless of whether you are coding an invoice system or a game engine, factors like time, portability and ease of use will be just as valid.

There is also automation to keep your eye on. While most of it is just media hype at this point, there will be some form of AI automation. The media always exaggerate things, so I think we can safely disregard a walking, self-aware Terminator-type robot replacing you at work. In my view you can disregard as much as 80% of what the media talks about (regardless of topic). But some industries will see vast improvements from automation. The oil and gas sector is the most obvious. At the moment security is as good as humans can make it – which means it is flawed, and something goes wrong every day around the globe. But smart pumping stations and clever pressure measurement and handling will make a huge difference for the people who work with oil. And safer oil pipelines mean lives saved and better environmental control.

The question is: how do we describe programs 20 years from now? Are our current tools up to the reality of IoT and billions of connected devices? Do we even have a language that runs equally well on a 1000-instance server cluster as it does as a stand-alone program on your desktop? When you start to look into parallel computing and multi-cluster data processing farms, languages like C# and C++ make little sense. Node.js is close, very close, but dealing with all the callbacks and odd limitations of JavaScript is tedious (which is why we created Smart Pascal to begin with).

The future needs new things. And for that to happen we first need the tools to create them. Which is where my passion lies.

Node, native and beyond

When people create compilers and programming languages they often do so for a reason. It could be that their own tools are lacking (which was my initial motivation), or that they have thought of a better way to achieve something; the reasons can be many. In Microsoft's case it was revenge and spite, after they were unsuccessful in stealing Java away from Sun Microsystems (Oracle now owns Java).

LDEF

LDef binaries are fairly straightforward. The less fluff the better

Point is, you implement your idea using the language you know – on the platform you normally use. So for me that is object pascal on Windows. I'm writing object pascal because while the native compiler and runtime are written in Delphi – they are made to compile under Freepascal for Linux and OS X.

But the primary work is done in Smart Pascal and compiled to JavaScript for node.js. So the native part is actually a back-port from Smart. And there is a good reason I’m doing it this way.

First of all I wanted a runtime and compiler system that would require very little to run. Node.js has grown fat in features over the past couple of years – but out of the box node.js is fast, portable and available almost anywhere these days. You can write some damn fast and scalable cloud servers with node (and by fast I mean FAST, as in handling thousands of online gamers all playing complex first person worlds) and you can also write some stable and rock solid system services.

Node is turning into a jack of all trades, capable of scaling and clustering way beyond what native software can do. Netflix actually re-wrote their entire service stack using node back in 2014. The old C++ and ASP approach was not able to handle the payload. And every time they did a small change it took 45 minutes to compile and get a binary to test. So yeah, node.js makes so much more sense when you start looking at big data!

So I wanted to write LDef in a way that made it portable and easy to implement, regardless of platform, language and features. Out of the box JavaScript is pretty naked stuff, and the most advanced high-level feature LDef uses is buffers to deal with memory. Everything else is forced to be simple and straightforward. No huge architecture or global system services, just a small and fast runtime and your binaries. And that's all you need to run your compiled applications.

Ultimately, LDef will be written in LDef itself and compile itself, needing only a small executable stub to be ported to a new platform. Most of Mono (C# for Linux) is written in C# itself – again making it super easy to move Mono between distros and operating systems. You can't do that with Visual Studio, at least not until Microsoft wants you to. Neither would you expect that from Apple XCode. Just saying.

The only way to achieve the same portability that Mono, Freepascal and C/C++ have to offer is naturally to design the system as such from the beginning. Keep it simple, avoid operating-system specific globals at all cost, and never-ever use platform-bound APIs except in the runtime. Be POSIX, but for everything!

Current state of standard and licensing

The standard is currently being documented, and a lot of work has been done in this department already. But it's a huge project to document, since it covers not only LDEF as a high-level toolkit, but stretches from the compiler and the source code it is designed to compile, all the way down to the binary output. The standard documentation is close to a book at this stage, but that's the way it has to be to ensure every part is understood correctly.

But the question most people have is often “how are you licensing this?”.

Well, I really want LDEF to be a free standard. However, to protect it against hijacking and abuse – a license must be obtained for financial entities (as in companies) using the LDEF toolkit and standard in commercial products.

I think the way Unreal software handles their open-source business is a great example of how things should be done. They never charge the little guy or the indie developer – until they are successful enough to afford it. So once sales hit a defined sum, you are expected to pay a small percentage in royalties. Which is only fair, since the Unreal engine is central to the software to begin with.

So LDef is open source, free to use for all types of projects (with an obligation to pay a 3% royalty for commercial products that exceed $4999 in revenue). Emphasis is on open source development. As long as the financial obligations of companies and developers using LDEF to create successful products are respected, only creativity sets the limit.

If you use LDEF to create a successful product where you make 50,000 NOK (roughly USD 5000), you are legally bound to pay 3% of your product revenue monthly for the duration of the product. Which is extremely little (3% of $5000 is $150, a lot less than you would pay for a Delphi license, the latter costing upwards of USD 3000).

 

Freepascal online, awesome work!

June 20, 2017 Leave a comment

Freepascal is, with the exception of gcc – the gnu c/c++ compiler toolchain – probably the most versatile and adaptable compiler in the world. Unlike most compilers outside the C camp, Freepascal is a complete toolchain, and it has more than a few similarities to c/c++ in that it's largely self-reliant.

MUI_PascalDemo

Freepascal scales from the old mc68000 family to the latest Intel i7, impressive!

FPC is easy to port over to new (or old) platforms, and as far as languages go – object pascal has a level of productivity that I have yet to find in other languages.

Now I have no interest in language wars or fanboy stereotyping, all languages have aspects that are unique and beneficial. So this is not an attempt to promote A over B.

Targeting both old and new from a single codebase

While mainstream, work-related topics rarely brush up against this – there is actually a healthy group of people still using older platforms. Naturally this is well within the "hobby" range – where passion and curiosity are the prime motivators rather than cash. But still, it is interesting to see how little modern software is capable of scaling back to processors 10, 15 or 20 years out of date.

Alb42 is the nickname of a very industrious individual who has taken it upon himself to port freepascal back to his roots. Which just happen to be the Amiga line of computers.

MiniSlugAmiga

Some of the best arcade games ever made are “vintage” by nature

But he has also done a lot of work for modern systems. A new Amiga was actually released this year (although most people haven't noticed it) – and finding development software for a brand new platform is not easy. You have a variant of C/C++ naturally – but the honest truth is that C/C++ is not the language most people use to create desktop productivity software. That is left to object pascal, C# and variations of Java. Delphi alone is responsible for a huge portfolio of popular software on Windows for instance, only recently surpassed by monoliths like C#.

So getting freepascal running on Amiga OS 4 (which is the latest and modern operating system shipping with the new Amiga x5000 model) is very important to the market share of the platform.

Do it online!

Alb42 has set up an online compiler testbed where you can type in your freepascal snippets and compile to executable code on the spot. You can choose between MorphOS, Classic 68k, Aros (x86) and Amiga OS 4 (ppc). It's a very simple setup, yet it's possible to do ad-hoc compilation and play around with the system 🙂

onlinecompile

While humble, Alb42 has set up an interesting online kitchen sink that demonstrates how flexible and powerful object pascal really is

To give it a go, head over to: https://home.alb42.de/fpamiga/
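
If you just want to see it work, a classic snippet like this is enough to get a 68k executable out the other end (my own example – nothing Amiga-specific about it):

program HelloAmiga;
begin
  WriteLn('Hello from Freepascal on the Amiga!');
end.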

And as always, visit his blog for VMware images with cross-compilation set up, and binaries for your favorite retro system.

But perhaps even more important, your new x5000 (!)

Dont forget to join us at Amiga Dispatch on Facebook.

Cheers!

A day with the Tinkerboard, part 2

June 10, 2017 Leave a comment

Please read the first article here before you continue.

Having gone through a wealth of alternative boards, alternative to the Raspberry PI 3b that is; I have to admit that I am really, really disappointed by the results. Not just with the Tinkerboard but also boards like the ODroid XU4 and the whole fruit-basket of banana, orange and pineapple PI clones.

asus

What a waste

And please keep in mind that I have not been out to discredit anyone. I have been genuinely interested in these alternatives for many reasons, not just emulation or multimedia. I have no problem paying that extra 15£ for a system that delivers twice the power of the PI, in fact I even paid close to 150£ for the UP board. Which is also the only board that so far has delivered on its promise. Microsoft Windows and the horrors of eMMC storage considered.

Part of what I work with these days includes node.js and full-screen browser views, often touch-enabled, for POS (point of sale) type machines. This is an exciting field, because for the first time in history we can now create rich, responsive and rock solid solutions using off-the-shelf web technology.

So while emulation of older systems is interesting for hobby purposes, the performance of vital technologies like the browser – and consequently the performance of JavaScript and the underlying JIT compilation, is naturally paramount for what I do.

So I was really looking forward to the extra power these boards advertise: the CPU, the number of cores, the extra memory and, last but not least, the benefits of a fast GPU. The latter directly impacts the effects we can use in our user interface. And while effects may seem superficial, they are important for touch feedback and the customer's experience.

desktop_embedded

We have all come across touch devices that give little or no visual feedback. We all know what it's like when you type something and the UI just hangs while talking with a server. This is why web tech is so important today. In many ways HTML5 and WebSocket are re-defining what we think of as software.

So while I bring few pie-charts and numbers to the table, I do bring honesty and perspective. It's not all fun and games.

Was it really fair?

It has been voiced by a couple of people that perhaps my way of testing things is unfair. Like I mentioned in my previous article, the PI has a healthy head-start and a rich community that not only consumes and demands, but also contributes substantially to the value of the product. A well established community around a product is worth its weight in gold. The alternative boards have yet to establish that, and when it comes to the Tinkerboard, it's pretty much just getting started.

Secondly, as far as emulation goes, we have to remember that those who make these products rarely have emulation in mind (and probably not Amiga, which by any standard is esoteric compared to Nintendo and Sega consoles). If we look at what people use gadgets like the Raspberry PI for, I think products like Kodi, Apache and various DIY programming experiments out-rank all the retro systems combined. And it's clear that the people behind the Tinkerboard are more media-oriented than anything else; it ships with hardware support for 4k video playback after all.

Having said that, I do feel I gave both the ODroid and Tinkerboard a fair trial. I did not approach these boards as a developer, but rather as a consumer. I have focused on the experience of familiar things – like using a browser and how well JavaScript runs, does the UI lag under stress; things that any customer will face after purchase.

UAE4Arm vs FS-UAE

As far as UAE (the Unix Amiga Emulator) goes, there is one thing that might be unfair. Amibian is based around something called UAE4Arm, which is a highly optimized version of the Amiga emulator. I would even go so far as to say it's THE most optimized UAE version on any platform – Windows and OSX included.

The Amiga was a lot more than just a games machine, it was way ahead of both Windows and OS X for a decade

Comparing that with FS-UAE is perhaps unfair. FS-UAE is based on the older, non-optimized codebase. It's known for stable emulation and good JIT performance. Sadly the JIT engine only works on x86 and is thus disabled on ARM systems. So to be honest it can't hold a candle against UAE4Arm. The latter uses lookup tables for everything, fast switch cases instead of long sequences of "if then else" code (easier for the compiler to optimize), and it benefits from hardcore llvm level 3 optimization. So I agree, comparing FS to UAE4Arm is like comparing a tiger with a kitten. It's not even in the same ballpark.
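
To give you an idea of what that lookup-table trick means in practice, here is a simplified sketch in plain object pascal (my own illustration of the technique, not actual UAE4Arm source). Instead of walking a branch ladder for every instruction, you pay for a single indexed read:

type
  TOpcodeHandler = procedure(Opcode: word);

var
  // One handler per possible opcode, filled in once at startup.
  // 64k pointers cost a little memory but remove all branching
  Handlers: array[word] of TOpcodeHandler;

procedure ExecuteNext(Opcode: word);
begin
  // one indexed jump replaces a long if-then-else ladder
  Handlers[Opcode](Opcode);
end;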

Optimization on this level is almost a lost art these days. People are used to languages like C# and Java – which are hopelessly bloated (for those that have seen what raw assembler and hand optimized pascal can deliver). On the Amiga we learned how to shave off a clock-cycle here and there – and the end result really made a huge difference.

The same type of low-level, every-cycle-counts optimization is used throughout UAE4Arm. FS-UAE is safe, slow and can hardly deliver A500 speed on a Raspberry PI. Yet UAE4Arm spits out 3.2 times the power of an Amiga 4000 with a 68040 cpu. That was the flagship power machine back in the day, and this little $40 card delivers 3 times that power under emulation.

It's like comparing a muffled 1978 Lada with a pissed-off 2017 Lamborghini.

Fair is fair

Having tested just about every distro I could find for the Tinker, even back-tracking to older releases in the hope that the drivers there would be better suited for the board (perhaps the new drivers were released in a hurry, or a mistake was made), the last straw really was Android. My hope was that Android would have better drivers, since it sees a lot of development. It has giants like Google and Samsung behind it – so the last glimmer of light was that Asus had focused on Android rather than Linux.

podcast-event

Will Android save the day and give the Tinker its due credit?

I downloaded the latest image, burnt it and booted – thinking that this should finally put things right. Well it didn’t. Quite the opposite.

The first boot took almost an hour. Yes, an hour. I suspect this is because it did a network restore – downloading packages and requirements from the Internet. But that is still no excuse. I have tried Android on the PI and it was up and running in less than 10 minutes; and that includes having to set some initial user information.

When the Android desktop finally came into view, it was stripped down with no repository package manager. In other words you would have to manually install packages. But that would have been OK had it not been for the utterly sluggish performance.

Android on the PI is no race-car, but at least it doesn't freeze and lock up every five seconds. Clicking on a link in the browser froze the whole system, and you were asked if you wanted to terminate the process again and again while Android figured out the meaning of life or whatever. Every single time.

I’m sorry but this is the worst experience I have ever had with a board. The ODroid XU4 at least tries to deliver a good experience. I had no problems finding Linux editions that ran well (especially gaming systems, Kodi and some homebrew distros) – and the chrome build for the ODroid gave far better results than both the PI and Tinkerboard. It was not the power-house it had been hyped up to be, but at least it delivered a normal desktop experience (even the PI locks up when facing demanding tasks, but the Tinker has an 8 core cpu, twice the ram and a supposedly superior GPU. With the capacity of running 8 threads and doing task scheduling in batches of 8 – the Tinkerboard should remain both stable and interactive under pressure).

I am sorry but I cannot recommend the Tinkerboard at all. The ODroid gives you a slight edge over the PI, and if someone baked a more optimized Linux distro for it (it ships with a variant of Ubuntu) then the ODroid XU4 would kick serious ass. But the Tinkerboard is a complete disaster.

Do not waste your money on the Tinkerboard. It may have the hardware, but the software to utilize and express that power is simply not there! It has been the most pointless purchase I have made this year.

A common waste

The common denominator between these alternatives is the notion that more power equals more stuff. Which ultimately results in biting off more than they can chew. A distro like Pixel is lightweight and optimized, and the PI foundation has spent a considerable amount of time making sure the drivers run as fast as possible.

Weird Science, a fun but completely fictional movie from 1985. I just posted this to liven up an otherwise boring trip down memory lane

Saying it doesn't make it so, except in Weird Science

What these alternative hardware suppliers don't seem to get is this: what is the point of a CPU twice as fast as the PI's when you waste it on a more demanding version of Linux? Both the ODroid and Tinkerboard ship with Ubuntu clones. And while Ubuntu is wonderful to work with on a high-end x86 machine, with 32 gigabytes of ram and an i7 CPU growling away under your desk, it is without a doubt the worst possible OS you could pick for a tiny ARM board.

To put things into perspective: the Tinkerboard has the "theoretical" CPU power of a Samsung Galaxy S4 mobile phone (2013). Yet in their infinite wisdom they decided Ubuntu is what is really going to show off this fart of a device.

It makes absolutely no sense from a consumer’s perspective. As a user you want that extra power for running programs. You want to use the extra power on your stuff. Yet these guys seem to think that wasting all the extra power on infrastructure is going to score points.

So if anyone from the Tinkerboard or ODroid camp reads this, please remember the old saying "If you cannot innovate, emulate". Make a fast, stable and small Jessie-based distro like Pixel. Focus on optimization. Compile whatever you can with maximum optimization. Shave off every clock cycle you can. Hand carve the drivers in assembler if that's what it takes.

Because the sad result is, that right now the Raspberry PI is outperforming you with lesser hardware. There is no point in buying a Tinkerboard because it actually delivers a worse experience. It has bitten off more than it can chew and it’s trying to be something it’s not.

A day with the Tinkerboard, part 1

June 7, 2017 Leave a comment

Anyone into IOT mini computers, be it as a hobby or professionally, has no doubt heard about the Tinkerboard. The Tinkerboard is an alternative to the Raspberry PI (aren't they all) produced by hardware giant Asus. Everything from its form-factor (size) to its color coordinated GPIO pins is more or less the same as the PI. What separates the two is that the Tinker gives you double the CPU power, double the ram and pretty much double of everything. If we compare it with the Raspberry PI 3b, that is.

But you don’t get double for free. The Tinkerboard retails on Amazon for USD 60 which means something in the ballpark of 55£ for those that live in the UK. For the rest of us that live in far-away places like Norway, prices and taxes may vary.

tinker

Before I continue I want to write a few words about pricing. The word “expensive” is one I keep hearing when it comes to the Tinkerboard, but that is extremely unfair considering what you get for your money.

The Raspberry PI 3b is an awesome product and packs a lot of goodies into a credit-card size computer. The same is true for the Tinkerboard, but they have packed even more stuff into the same space! The difference in price is just 15£ – but those 15£ give you the power to emulate demanding platforms like Nintendo Wii, Sega Dreamcast and Playstation (if you are into retro gaming). Platforms the Raspberry PI 3b doesn't stand a chance of emulating (at least not smoothly).

For serious applications all that extra CPU power can be the difference between success and failure. In many ways the Tinker feels a bit like working on a low-end laptop. If you are building a movie server for your living-room, the Tinkerboard will be able to convert and stream movies much more smoothly than the PI. And if you are looking for a device to power your next kiosk system or POS device, again the extra memory and CPU power are going to make all the difference. And with just 15£ between the two models, I cannot imagine why anyone would pick the PI if gaming or movie streaming is on the menu.

Oh and it has built-in wi-fi and bluetooth support! Here are the specs as listed on Amazon:

  • High performance quad-core ARM SoC at 1.8GHz with 2GB of RAM – the Tinker board features the Rockchip RK3288 SoC with a Mali-T764 GPU and 2GB of dual-channel DDR3 memory
  • Non-shared GBit LAN, shielded Wi-Fi with upgradable antenna support
  • Highly compatible PCB & topology – the Tinker board offers extensive compatibility with SBC accessories & chassis
  • HD audio & HD/UHD video support – the Tinker board supports 192kHz/24-bit HD audio along with accelerated HD & UHD (4K) video playback (*requires use of the Rockchip video player in TinkerOS)
  • DIY-friendly design – the Tinker board features multiple DIY-friendly touches, including a color-coded GPIO header, silkscreen PCB and color-coded pull tabs

That sounds pretty nice, doesn't it 🙂

One does not simply take on the mighty PI

Being an alternative to the Raspberry PI is not as easy as it sounds. You would imagine that it all boils down to the specs, right? So if you beef up the cpu, ram and gpu, everyone will throw away their PIs and buy your stuff? If only that were true. In which case the Tinkerboard would be awesome, since it outperforms the PI on every level.

But it’s not that simple. There is more to a product than just raw power.

One of the cool things about the Raspberry PI is the community. The knowledge base of the product, if you like. In order to build a community and be a success, everything has to click. I mean, just look at the third-party ecosystem that exists around the PI: all those companies selling add-ons and widgets, sensors and remote controls. The PI itself is quite dull. It's all that extra stuff that makes it so fun to play with. And then you have things like excellent customer service, a forum where people help each other out, well written documentation – the value of the product suddenly exceeds whatever you paid for it to begin with.

And the Tinkerboard is not alone in trying to get a piece of the action. It competes with boards like the ODroid XU4, the x86 based UP boards, Latte Panda, The Pine boards and a whole fruit-basket of banana, orange and whatever PI clones. All of them trying to get a bite of the phenomenon that is the Raspberry PI.

So Asus has their work cut out for them.

First impressions

Having unpacked the board and plugged in all the cables (which is exactly as dull as it sounds) the first thing you do is look for software. Which naturally should be a one-click operation on their website.

Well, the first thing that annoyed me was exactly that: the Asus website. The Tinkerboard is not something that belongs on a sterile, streamlined corporate site. It belongs on a website that is easy to navigate, with plenty of the information hobbyists and "tinkerers" need to get started. The Asus website simply lacks the warmth and friendliness of the PI foundation website. I even had to google around a couple of times just to find the disk images. These should be readily available on the site, on the first page even – to make sure customers can get up and running quickly.

tinkerweb

It’s a little bit clangy and a little bit jammy

The presentation of the board was ok, but you really shouldn't have to use an external search engine to locate essential data for something you just bought. Unlike the PI foundation, Asus seem to have put little effort into what customers get once they have bought the board. As for community, I don't even know if there is one.

When you finally manage to find the bloody download page, these are your options (there are 17 downloads in total, counting older OS images, config files and schematics):

  • TinkerOS 1.8
  • TinkerOS 1.6 Beta (my initial download)
  • Android, also beta

And don’t expect a polished Linux distro like Pixel either. You get a clean distro with some typical stuff pre-loaded (python, perl for some reason, and various tidbits) but nothing close to the quality of Pixel.

Note: Why they would list older beta releases in their downloads is beyond me. It only adds to the confusion of a sterile list of items. And customers who just got their boards will be eager to try it out and hence make a mistake like I did. Beta and unstable releases should be in a separate category, while the stable and up-to-date images are listed first.

The second thing that really irritated me was something as simple as adjusting the keyboard layout and setting the locale. These are simply not present in preferences at all. Once again you have to google around until you find the terminal commands to manually set these things, which utterly defeats the point of booting into a desktop environment to begin with. This might seem trivial for someone who knows Linux well, but to a novice it can take hours. Things this fundamental should be available in preferences, or at the very least as a script. Like raspi-config on the PI.

But OK. If we look away from these superficial things and focus on the actual board, things are not half bad.

It’s the insides that counts

The first thing you are going to notice is how much faster the system is compared to the PI. When you start a demanding application on the PI, like LibreOffice or Lazarus, the whole system can freeze up for a few seconds while the CPU goes through the roof. I was pleased to find that this is not the case here (at least not as disruptively). You can start a program and the system remains responsive while things load.

What made me furious though was the quality of the graphics drivers. Performance-wise the Tinkerboard is twice (actually a bit more than twice) as fast as the PI on all fronts – and the GPU is supposed to deliver a vastly superior graphics experience. Yet when I ran Amibian.js (the Smart Pascal JavaScript desktop system) in Chrome – the CSS hardware effects were painfully slow. I even questioned if the GPU was used at all (note: I later confirmed it wasn't – so much for downloading a beta image by mistake). I was amazed at the CPU power of the Tinkerboard, because despite having to do everything purely with the CPU (moving pixels in memory, allocating device independent bitmaps, rasterizing layers and complex compositions in X) it was more than usable.

But it sure as hell was not the superior graphics experience I was hoping for.

I also noticed that when moving a normal desktop window (X window that is) rapidly around the screen, it would lag behind and re-trace your steps until it caught up with the mouse pointer. This was very odd since most composition engines would have eliminated these movements as much as possible – and the GPU would do all the work. Again, I blame the drivers and pondered what could possibly make a GPU perform so badly.

gpuwrong

I'm surprised it booted at all; nothing is working

So while the system is notably faster than the PI, I get the sense that the drivers and tooling are not really polished. The hardware no doubt has juice, but without drivers carefully written for the board, most of the extra juice you pay for is wasted.

In the end I opened up Chrome and typed "chrome://gpu" to see what was going on. And not surprisingly, I was correct. It wasn't using the GPU at all, and I had wasted hours on something that should have been clearly marked "Lacks hardware acceleration" in bold, italic, underlined big red letters. Like the PI foundation did when they shipped the Raspberry PI 2 without GPU drivers. But they promptly delivered, and it was ok because you were informed up front.

 

Back to scratch

After a trip back to google I downloaded the right (or at least working) image, namely the TinkerOS 1.8 stable. It booted up, resized the partition to fill the whole SD card automatically (and other boring bits).

The first thing I did was start chrome and use the “chrome://gpu” to get some statistics. And thankfully most of the items were now working and active.

gpubetter

That's better, although I ponder why GPU DIBs are not active

Emulation, the name of the game

While I have serious tests to perform, there is no doubt that what I was most eager to play with was UAE (the Unix Amiga Emulator). Like most retro-heads I have a ton of Raspberry PIs and ODroids around the house with the sole purpose of emulating older game or computer systems. And I think everyone knows that my favorite computer in the whole wide world is the Commodore Amiga.

On the PI, Gunnar’s native Amibian distro gives you awesome performance. If you overclock the ram, gpu and cpu using safe numbers – the PI 3b spits out roughly 3.2 times the power of a Mc68040 based Amiga 4000. In other words 3 times the power of a high-end Amiga from the early 90s. That is quite an achievement for a $40 board. You can only imagine my expectations for a board running more than twice as fast as the PI does (and costing that 15£ more).

18986661_10154521981875906_472132585_o

Boots fine but you can forget about using it. The PI is 100 times faster than this

One snag though was that UAE4Arm, the highly optimized version of the Amiga emulator, doesn't exist for the Tinkerboard. In theory it should compile without problems, since the latest version uses SDL exclusively. So there is no odd, deprecated UI toolkit like the old UAE4Arm used.

The only thing available was fs-uae. And while that is a very popular emulator on Windows and Mac, it's sadly not optimized as much as UAE4Arm is. The latter uses lookup tables to the extreme, all if-then-else chains have been replaced by fast switches — and it's just optimized beyond anything else running on ARM.

The result on the PI is phenomenal, but sadly fs-uae was all I could play with. I will try to compile UAE4Arm on the Tinkerboard later though. But right now I'm too busy.

Not optimized at all

It is hard to pinpoint exactly where the problem is, but I suspect the older codebase of fs-uae, combined with poorly written drivers, is what resulted in the absolutely sluggish behavior.

Don't get me wrong, if all you want to do is play some A500 or A1200 games, the Tinker is more than capable. But if you want to run the same high-performance desktop Amibian gives you on the PI – you can pretty much forget it.

I set the system to emulate an A4000, activated the JIT engine and set the emulated display to match my desktop. This should run like hell — yet it was slow as a snail. You could literally see the mouse pointer slowly trying to keep up as I moved it across the desktop.

One interesting thing though. When I ran fs-uae on the first image, the beta image where everything was crap, it actually ran exceptionally well. But here on the so-called stable and “up to date” version, it was useless for anything but clean floppy gaming.

There are many factors involved in this. The drivers sit at the bottom, just above the kernel, and expose the hardware so it can be used through a common API. On top of this you have stuff like OpenGL, and on top of that again you have SDL, which simplifies typical gaming or graphics tasks. This may sound like a long road for code to travel, but it's actually very fast. If the drivers expose the power of the GPU, that is.

I can only conclude at this point that the Tinkerboard has the firepower – I have seen the results of tests performed by others (and tested a retrogaming image), and you feel the difference almost immediately when you boot the device. But the graphics drivers seem… crippled somehow. It's easy to point the finger at Asus and say they have done a terrible job writing drivers here, but it can also be a bottleneck elsewhere in the system.

This is something Raspberry PI has got so right. Every part of the system has been optimized for the hardware, which makes the PI feel smooth even under a lot of stress. It delegates all the hard graphical stuff to the GPU – and the GPU just deals with it.

Sadly this is not the case with the Tinkerboard so far. And it really is such a shame because the specs speak for themselves.

Android

Next time: we will be giving Android a test, and hopefully that will have drivers that are more up to date than whatever we have seen so far…

Smart Pascal: Memory and pointers

June 4, 2017 Leave a comment

Allocating, manipulating and working with memory is an important part of any programming language. While there are many ways of achieving the same result, comparing operations that work directly on the data using pointers with code that is restricted to high-level, fixed-datatype arrays is not even a competition. By using pointers you will be able to copy, fill, move and otherwise do things to large chunks of memory that would otherwise be slow and impractical.

Anyone who has ever coded their own game, or worked with large blobs in a dataset, knows this. I would even argue that you can't write a high-speed game in a native environment without pointers (unless you are using one of those game makers, which is not real programming anyway) – and the same can be said about database engines. Could you imagine writing a database engine without using pointers to move data around in memory? I would hope not.

I'm not saying that you can't, I'm simply saying it would be a total waste of time not using pointers when the alternative would be 100 times slower.
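
For readers who want the concrete picture, this is the kind of thing I mean – a tiny native pascal sketch (Delphi/Freepascal, names are mine) where a block fill and a block copy happen without a single hand-written loop:

procedure FillAndCopyDemo;
var
  Src, Dst: PByte;
begin
  GetMem(Src, 1024);
  GetMem(Dst, 1024);
  try
    FillChar(Src^, 1024, $FF); // blast $FF into the whole source block
    Move(Src^, Dst^, 1024);    // raw block copy, source to destination
  finally
    FreeMem(Src);
    FreeMem(Dst);
  end;
end;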

The web stack and memory

JavaScript is known for many things, but manual memory management is certainly not one of them. That doesn't mean JavaScript lacks features for dealing with memory (those days are thankfully behind us), it simply means that your average JavaScript developer hasn't explored this aspect to the same extent as a C or Object Pascal developer would. There is no shame in this; if JavaScript is all you know, then it will naturally be a field you are uncomfortable with.

But the JavaScript virtual machine and runtime environment in a modern browser is actually quite good at memory. A lot has happened to JavaScript these past few years, and regardless of what engine you use – be it Spidermonkey under Firefox, V8 under Chrome, or JavaScriptCore under Safari (and Chakra in Microsoft Edge) – they have all been advanced with cool new features that object pascal and C/C++ coders are familiar with.

So the fact is that a well-written JavaScript function can move data around just as fast as your Object Pascal or C++ code. You just need to know the rules, and also get things into a framework that is easier to work with. JavaScript may have the functionality – but like all things JavaScript it's not pretty and takes some getting used to. Which is why the Smart RTL is so nice to have.

How does memory work under JSVM?

First a few words about the JSVM (the JavaScript virtual machine). First of all, the runtime and the environment are two different things. If you have ever used a script engine in your Delphi or Lazarus applications – like DWScript, PascalScript or Lua for that matter – you will notice that out of the box these engines have no functionality except the fundamental language. They can run functions and create class instances, but the engine itself has no concept of a file or a chunk of memory. Not unless you define those features and register the methods that deliver them.

So it is with the JavaScript runtime environment. It is initially empty. Engines like V8 or Spidermonkey have, out of the box, no special functionality at all. It is up to whoever uses the engine (in this case those that make browsers) to write and expose native objects to the engine. Objects that your code can access and consume.

The way memory is handled under node.js, for instance, is much faster than how it's done in a browser, because node has been optimized for server tasks rather than client tasks. There are also subtle differences between a node.js buffer object and a browser buffer object. This is where the strength of the Smart RTL becomes apparent, because it shields you from these differences and gives you a unified API that deals with all of this for you. So no matter if you work with node.js or Firefox – the same code will run identically on both JSVM configurations.

OK, let's look at what JavaScript has to offer. As mentioned above, JavaScript is empty by default, and it's up to the browser or node to expose a set of common objects you can access and use. These objects are first and foremost:

  • Buffers
  • Views
  • Blobs

PS: we can look away from the latter since it's not really interesting at this point.

Right, as you can probably guess, the buffer object is the actual data. Or more correctly, it is the container object for your data. It doesn't have any read or write methods and exists only to keep track of an allocated chunk of memory. Think of it as a TMemoryStream without any read or write functionality.

But how do we access that memory? Well, this is where things get interesting, because JavaScript has view objects that you attach to your buffer. It's a bit like creating a TStreamWriter or TStreamReader object under object pascal.

The interesting part is not that a view object exposes read and write methods, but that the view type defines a fixed datatype. A fixed datatype of your choosing.

Let's say you want to access your buffer as an array of longwords – well, there is a view for that (and you can also create something called "typed arrays" connected to a buffer). But you can also access the same buffer using, say, a word (16-bit) integer if you wish. Since you can have multiple views attached to the same buffer, this opens up some interesting optimization techniques – not to mention spectacular error messages when you get it wrong (ps: stick to the RTL and it will take care of you).

The rule is that unsigned datatypes (like uint8 for a "normal" byte) are prefixed with a "u", while signed types have no prefix (see the sketch after the list below).

  • So views that access a buffer as unsigned types are prefixed with “u”
    • uint8buffer
    • uint16buffer [and so on]
  • Signed views lack the prefix
    • int8buffer
    • int16buffer
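
To make the buffer/view relationship concrete, here is a minimal sketch using Smart Pascal's inline asm sections to poke at the raw JavaScript API directly (ArrayBuffer, Uint8Array and Uint32Array are the browser's own type names; the procedure itself is just my illustration):

procedure BufferViewDemo;
begin
  asm
    var buf   = new ArrayBuffer(16);   // the container: no read or write methods
    var bytes = new Uint8Array(buf);   // view the bytes as unsigned 8-bit values
    var longs = new Uint32Array(buf);  // view the same bytes as unsigned 32-bit
    longs[0] = 0xBABECAFE;             // write one longword through one view ...
    console.log(bytes[0]);             // ... read it back byte by byte ($FE first on little-endian)
  end;
end;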

Now the Smart Pascal RTL will shield you from all the nitty-gritty – but it's important to understand how memory and raw data are dealt with behind the scenes by the JSVM. And abstracting the reader/writer methods from the actual data makes sense. In fact I implemented something similar myself earlier (in my ByteRage Delphi library).

Once you get to know it, every piece falls into place.

Smart memory

The units we will be looking at are the following:

  • System.Types
  • System.Types.Convert
  • System.Memory
  • System.Memory.Allocation
  • System.Memory.Buffer

Start by opening up the System.Memory unit. Spend a few minutes getting an overview of the classes and functions there. And when you have had a peek, focus on the class TAddress.

TAddress is a class that wraps around a JavaScript untyped buffer. It can be compared to a pointer holding the address to the memory you allocated (but also more, like the size). Let’s have a look at it:

  TAddress  = partial class(TObject)
  private
    FOffset:    Integer;
    FBuffer:    TMemoryHandle;
  protected
    function    GetSize: integer;virtual;
  public
    Property    Entrypoint: integer read FOffset;
    Property    Segment: TMemoryHandle read FBuffer;
    Property    Size: integer read GetSize;
    function    Addr(const Index: integer): TAddress;
    Constructor Create(const Segment: TMemoryHandle;
                const Offset: integer); overload; virtual;
    Destructor  Destroy;Override;
  end;

As you can see it's very small and compact. It exposes a segment handle property, which is a reference to the memory object it represents. It also exposes size information, an entrypoint and a curious Addr() function.

The Addr() function is for getting a sub-address from within the memory buffer. Let's say you want to write some data 1024 bytes into your allocated buffer; calling FMyAddress.Addr(1023) returns a new TAddress instance pointing at that particular place inside the buffer (offsets being zero-based).

This is why we have an entrypoint property. It keeps track of the starting offset a TAddress begins at. If you start at 1023, then all subsequent offsets will have the entrypoint added to them; they will be relative to this position. Think of it as the position property in TStream if you like. And you can naturally have as many TAddress instances pointing to the same buffer as you like – hence we use offsets to speed things up (instead of creating a new view object for every single reference).
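
A quick sketch of how that plays out (using the TMarshal allocator we will meet in a second; the procedure and variable names are mine):

procedure SubAddressDemo;
var
  LBase, LSub: TAddress;
begin
  LBase := TMarshal.AllocMem(4096); // one shared buffer
  LSub  := LBase.Addr(1024);        // same buffer, entrypoint now 1024
  // offsets used through LSub are relative to position 1024 - no copying involved
  TMarshal.FreeMem(LBase);
end;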

Let’s allocate some memory:

procedure TForm1.ReserveMemory;
var
  LData: TAddress;
begin
  LData := TMarshal.Allocmem(1024 * 1024);
end;

The above code would allocate 1 megabyte of data, and LData now points to it. We can now use the other methods of TMarshal to work with the buffer. Let’s have a look at the methods TMarshal has to offer:

  TMarshal = class static
  public
    class property  UnManaged: TUnManaged;
    class function  AllocMem(const Size:Integer): TAddress;
    class procedure FreeMem(Const Segment: TAddress);

    class procedure Move(const Source:TAddress;
                    const Target:TAddress;
                    const Size:Integer);overload;

    class procedure Move(const Source:TMemoryHandle;
                    const SourceStart:Integer;
                    const Target:TMemoryHandle;
                    const TargetStart:Integer;
                    const Size:Integer);overload;

    class procedure FillChar(const Target:TAddress;
                    const Size:Integer;
                    const Value:char);overload;

    class procedure FillChar(const Target:TAddress;
                    const Size:Integer;
                    const Value:Byte);overload;

    class procedure ReAllocMem(var Segment: TAddress;
                    const Size:Integer);

    class function  ReadMemory(const Segment: TAddress;
                    const Size:Integer):TByteArray;overload;

    class procedure WriteMemory(const Segment: TAddress;
                    const Data:TByteArray);

    class procedure Fill(Const Buffer: TMemoryHandle; Offset: Integer;
                    ByteLen: Integer;const Value: Byte);
  end;

As you can see you pretty much have the same features as in Delphi or Freepascal, albeit in a different form. Delphi, being a native language, doesn't need such intermediate objects, but the above code is actually leaps and bounds easier to work with than anything JavaScript gives you out of the box.
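
For instance, a block fill followed by a block copy looks like this (a small sketch that only uses the methods declared above; the procedure name is mine):

procedure TForm1.FillAndCopy;
var
  LSource, LTarget: TAddress;
begin
  LSource := TMarshal.AllocMem(4096);
  LTarget := TMarshal.AllocMem(4096);
  TMarshal.FillChar(LSource, 4096, $FF); // fill the source with $FF bytes
  TMarshal.Move(LSource, LTarget, 4096); // raw block copy, source to target
  TMarshal.FreeMem(LSource);
  TMarshal.FreeMem(LTarget);
end;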

Let's say I want to write a longword 1024 bytes into our allocated memory. You would then use the Addr() function we talked about earlier to simplify things:

procedure TForm1.ReserveMemory;
var
  LData: TAddress;
begin
  LData := TMarshal.Allocmem(1024 * 1024);
  TMarshal.WriteMemory(LData.Addr(1023), TDataType.Int32ToBytes($BABECAFE));
end;

Now you are probably wondering – why waste time converting the $BABECAFE value to bytes? Well, you don't have to, but it's one way of doing it. You can of course create a uint32 view and write it directly like you would an array. But in order to make you more familiar with the RTL landscape I figured it would be valuable to start with the byte conversion. Which, by the way, saves you a lot of time. So have a peek at TDataType in System.Types.Convert.pas; the features there will impress you once you realize how little JavaScript gives you – and how much the RTL makes of it.
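
For completeness, reading the value back out is the mirror image (ReadMemory is declared above; which TDataType helper turns the four bytes back into an int32 I will leave for you to discover in System.Types.Convert):

procedure TForm1.ReadItBack;
var
  LData:  TAddress;
  LBytes: TByteArray;
begin
  LData := TMarshal.AllocMem(1024 * 1024);
  TMarshal.WriteMemory(LData.Addr(1023), TDataType.Int32ToBytes($BABECAFE));
  LBytes := TMarshal.ReadMemory(LData.Addr(1023), 4); // the four bytes we just wrote
  TMarshal.FreeMem(LData);
end;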

Moving up to TBinaryData

Now messing around with low-level byte buffers can be rewarding, but it's not really the level you want to work on. You want the power of low-level stuff, but with the infrastructure of high-level programming.

In the unit "system.memory.buffer" you will find a class written to cover all your binary needs. It is optimized for reading and writing data fast, and for writing various datatypes directly into the buffer without TAddress being a factor.
So while you can enjoy the low-level functionality of AllocMem – you are actually expected to use TBinaryData. TMemoryStream is also fine, but TBinaryData will always be faster.

And it comes with methods for pretty much everything you could think of, including portability! Be it buffer to stream, buffer to arrays or the other way around, TBinaryData has a lot of cool features. With ZLib just added to the new RTL, compression and zip file support will be added to this class (through partial classes, so only when you include the zlib unit will the methods appear in the class):

  TBinaryData = partial class(TAllocation
      ,IBinaryDataReadAccess
      ,IBinaryDataWriteAccess
      ,IBinaryDataReadWriteAccess
      ,IBinaryDataBitAccess
      ,IBinaryDataImport
      ,IBinaryDataExport)
  private
    FDataView:  JDataView;
  protected
    function    GetByte(const Index: integer):Byte;
    procedure   SetByte(const Index: integer;const Value:Byte);
  protected
    function    GetBitCount: integer; virtual;
    function    GetBit(const bitIndex: integer): boolean;
    procedure   SetBit(const bitIndex: integer;const value: boolean);
  protected
    function    OffsetInRange(Offset: integer): boolean;
  protected
    procedure   HandleAllocated; override;
    procedure   HandleReleased; override;
  public
    property    BitCount:integer read GetBitCount;
    property    Bytes[const ByteIndex: integer]: byte read GetByte write SetByte; default;
    property    Bits[const BitIndex: integer]: boolean read GetBit write SetBit;

    function    Allocation: IAllocation;

    procedure   AppendBytes(const Bytes: TByteArray);virtual;
    procedure   AppendStr(const Text: string);virtual;
    procedure   AppendMemory(const Buffer: TBinaryData; const ReleaseBufferOnExit: boolean);virtual;
    procedure   AppendBuffer(const Raw: TMemoryHandle);overload;
    procedure   AppendFloat32(const Value: Float32);virtual;
    procedure   AppendFloat64(const Value: Float64);virtual;

    procedure   CopyFrom(const Buffer: TBinaryData; const Offset: integer; const ByteLen: integer);
    procedure   CopyFromMemory(const Raw: TMemoryHandle;
                Offset: integer; ByteLen: integer);

    function    CutBinaryData(Offset: integer;ByteLen: integer): TBinaryData;
    function    CutStream(const Offset: integer; const ByteLen: integer): TStream;
    function    CutTypedArray(Offset: integer; ByteLen: integer): TMemoryHandle;

    procedure   Write(const Offset: integer; const Data: TByteArray);Overload;
    procedure   WriteFloat32(const Offset: integer; const Data: Float32);
    procedure   WriteFloat64(const Offset: integer; const Data: Float64);

    procedure   Write(const Offset:integer;const Data:TBinaryData);overload;
    procedure   Write(const Offset:integer;const Data:TMemoryHandle);overload;
    procedure   Write(const Offset:integer;const Data:String);overload;
    procedure   Write(const Offset:integer;const Data:integer);overload;
    procedure   Write(const Offset:integer;const Data:Boolean);overload;

    function    ReadFloat32(Offset:integer):Float;virtual;
    function    ReadFloat64(Offset:integer):Float;virtual;
    function    ReadBool(Offset:integer):Boolean;Overload;
    function    ReadInt(Offset:integer):integer;overload;
    function    ReadStr(Offset:integer;ByteLen:integer):String;overload;
    function    ReadBytes(Offset:integer;ByteLen:integer):TByteArray;overload;

    function    Clone: TBinaryData;

    procedure   SaveToStream(const Stream: TStream);
    procedure   LoadFromStream(const Stream: TStream);

    procedure   FromBase64(FileData: string);
    function    ToBase64: string;
    function    ToString: string;
    function    ToTypedArray: TMemoryHandle;
    function    ToBytes: TByteArray;
    function    ToStream: TStream;
    function    ToHexDump(BytesPerRow: integer;
                Options: TBufferHexDumpOptions): string;

    {$IFDEF APPTYPE_VISUAL}
    procedure   LoadFromFile(const aFileURI:String; const OnReady:TNotifyEvent);
    {$ENDIF}

    Constructor Create(aHandle: TMemoryHandle); overload; virtual;
  end;

If you already have allocated memory, or perhaps your server has been handed a buffer for a file – notice that the constructor is overloaded and can work with existing buffer objects. In such a case it will not release the memory even if you free the instance (since it doesn't own it, it only borrows it).

I took it one step further and added bit manipulation as well. So you can allocate a large buffer and process it on bit level if you need to. I actually used that a lot when doing some audio code. It is also very handy for keeping track of visible sprites in a game ("find me an idle sprite", where each bit represents a sprite's busy state). And it's even an important feature when writing database engines.
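
To give you a taste of what working with it feels like, here is a hedged sketch (I am assuming here that passing an empty handle to the overloaded constructor gives you a fresh buffer the instance owns – check the unit source if in doubt):

procedure TForm1.BufferDemo;
var
  LBuffer: TBinaryData;
begin
  LBuffer := TBinaryData.Create(nil); // assumption: nil handle = fresh, owned buffer
  try
    LBuffer.AppendStr('Hello memory'); // grow the buffer with some text
    LBuffer.AppendFloat32(3.14);       // ... and a 32-bit float
    LBuffer.Bits[0] := true;           // flip a single bit in place
    WriteLn(LBuffer.ToBase64);         // export the whole thing as base64
  finally
    LBuffer.Free;
  end;
end;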

Well, this was a quick introduction to the low-level memory stuff in Smart. There is a ton of features I haven't even mentioned yet, like being able to turn data into a URI object that can be attached to anchor (A tag) objects and auto-clicked to force a "save as" dialog to appear. This is really neat if you have a web application that generates data or binary files the user can save.

Until next time !

Smart Pascal: stop sharing beta code

May 20, 2017 Leave a comment

I was a bit sad to hear that our RTL, despite the testing team being a very closed group of people, had made its way into the hands of others. People who then blog or comment that it's unstable, or that too many changes have been added.

When the phrase “early beta” and “our first sprint is to test the low-level functionality, like pointers, memory allocation and manipulation, streams, compression, codecs and type conversion” is clearly stated, then why on earth would someone start at the other end (the visual high level) which is due for changes weeks into the future?

Since this is doing the rounds I figured it is only fair that I clear up a few things:

An async runtime library that should deliver identical behavior on 9 operating systems and a plethora of browsers is not something you slap together over a weekend. Many of the solutions in SMS are the result of months of hard work; research and testing on PC, Mac, various mobile devices and six different embedded boards.

I don’t know how many hours I have put into this project these past 6 years, but its more than I care to admit.

Obviously an early beta is not going to give you 100% compatibility. Especially not since the whole point of the first sprint was for our group to write, test and make sure the low-level methods are rock solid before we move on.

The second sprint is due soon, here we will look at databases, data binding, node.js and writing system level services (via the PM2 process manager).

The last sprint will focus on the visual, high-level aspect of the RTL. Things like custom controls, layout, non-visual components, event objects, messaging and user interface code.

I’m really gutted that we didn’t even make it through the first sprint before the code was shared. I have no idea who did this, and I frankly don’t care. But from now on I will only throw ball with people I trust and have a friendship with.

People doing reviews and publishing opinions on an upgrade that is presently 1/3 into the final release candidate – I just feel that is unfair.

Polyfills are not shims, and they are not dangerous

It was voiced that the new RTL loads in a couple of files that were considered dangerous by Google. Actually, that couldn't be further from the truth.

But stop and think about this: why would I link to external JavaScript files for things like mutation-events and observers? If these things are available in browsers (which they have been for a long time) why would I need external files?

The answer is that these files are polyfills. And there is a huge difference between what is known as a shim, and a polyfill.

A polyfill is a small JavaScript library that checks if the browser supports the features you want. But if it doesn’t – then the polyfill will register its own code to compensate for the lack of a function or feature. In other words a polyfill is a full implementation (!)

So even if Google, Mozilla or Microsoft remove support for node observers or mutation events tomorrow, your application won't be affected by it. Because the polyfill libraries are just that: full library implementations with no dependencies.
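
If you want to see the principle, this is roughly all a polyfill does – feature-detect first, register a fallback only when needed (a simplified sketch in Smart Pascal's inline asm form, not the actual library code):

procedure EnsureMutationObserver;
begin
  asm
    if (typeof window.MutationObserver === "undefined") {
      // only now does the polyfill register its own, self-contained
      // implementation under the very same name
      window.MutationObserver = function (callback) {
        /* full fallback implementation goes here */
      };
    }
  end;
end;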

In the early beta I shipped with both the observer and mutation-events libraries referenced. This has already been changed; unless you include SmartCL.Observer.pas or SmartCL.MutationEvents.pas, these will no longer be linked automatically. That was never the plan either.

Had one of you (I have come across 3 different places with comments and info on the RTL around the web) bothered to just ask me, I would have included you in the beta process and explained this.

The executables have become so large!

This is another statement I stumbled over: “The new RTL is huge and it generates large JavaScript binaries!”.

First of all, you were using an early beta, which means things will be in flux while each level of the RTL is being tested, probed and verified. The pascal uses section in some units references code they no longer use; cleaning that up is due for sprint 3. But again, in our first sprint the focus was on memory, binary data, type conversion, streams, compression and various low-level stuff – not high-level code!

As for size, that is actually wrong: our namespacing and unit fragmentation are there for a reason. By dividing files into 3 namespaces and fragmenting large units with many classes into more files, our linker is able to trim away unused code more efficiently – as well as optimize your code beyond anything people have seen so far.

poptions

Rule of thumb: when you have finished writing your application, always switch on the optimization like you see above for the final build. So yes, you are expected to use packing, obfuscation etc. before deployment. Not just to protect your code, but also to make it load and run faster.

Since you will either use your applications on a mobile device, an embedded device or just online like a website (or desktop via nodewebkit compilation) – you can further speed up loading by activating gzip compression on the server. You will be surprised just how fast and small your applications are when you tune them correctly.

Practical example

The Smart Embedded and Cloud desktop consists of roughly 40+ unit files. Since this desktop can emulate older systems, like the 68k Amiga, Atari, Super Nintendo and much more (more about those later), I have made linking these in optional (read: you have to include the units in order to link the code in, UAE.Core.pas being the primary file).

smartdesk

Without the 68k emulation layer (and various other emulators, which are just added for fun and inspiration) and the Aros ROM files (the bios file for the emulator), we are looking at a code size in the ballpark of 4.7 megabyte. Which is nothing compared to high-end native solutions.

If you include the 68k emulation layer, the end result is a 10 megabyte index.html file. Again this is small potatoes if you work with embedded systems or kiosk booths.

But since the new RTL design was created to be heavily optimized, far better than the older RTL ever was, the end result after compression is 467 kilobyte (!). When you compress the version that includes the 68k emulation layer – it optimizes down to 1.5 megabyte of code.

  • No 68k emulation layer = 4.7 megabyte uncompressed
  • No 68k emulation, compressed = 467 kilobyte (!)
  • 68k emulation added = 10.2 megabyte uncompressed
  • 68k emulation included, compressed = 1.5 megabyte
obfuscation

From 4.7 megabyte to 467 kilobyte

This level of compression and optimization was planned. I have re-arranged the whole RTL in order to create the best possible circumstances for the compression and obfuscation modules – and I believe the results speak for themselves.

Is the new RTL larger? Indeed it is, with plenty of new classes, controls and libraries. The more features you want to enjoy, the more infrastructure must be in place. That is just how it is, regardless of language.

But you should also know that the new and longer inheritance chain has little or no impact on execution speed. This is what the compiler options "de-virtualization" and "smart linking" are all about. In most cases the code for TW3OwnedObject and TW3OwnedErrorObject is linked into TW3Component. Here is the inheritance chain as of writing:

  • TW3OwnedObject
    • TW3OwnedErrorObject
      • TW3Component
        • TW3TagObj
          • TW3TagContainer
            • TW3MovableControl
              • TW3CustomControl

Why did I change this? There are many reasons. First of all I wanted to make the RTL more compatible with Delphi, where names like TComponent and TCustomControl are known and understood by most. So when facing TW3Component or TW3CustomControl, most Delphi developers will intuitively know what these are all about. TW3Component is also the base class for non visual, drag & drop components.

  TW3OwnedErrorObject = class(TW3OwnedObject)
  private
    FLastError: string;
    FOptions:   TW3ErrorObjectOptions;
  protected
    property    ErrorOptions: TW3ErrorObjectOptions read FOptions;
    function    GetLastError: string;
    procedure   SetLastErrorF(const ErrorText: string;
                const Values: Array of const);
    procedure   SetLastError(const ErrorText: string); virtual;
    procedure   ClearLastError;
  public
    property    LastError: string read FLastError;
    constructor Create(const AOwner: TObject); override;
    destructor  Destroy; override;
  end;

There are also other benefits, like uniform error handling. While this might seem useless in an exception-based language, there are places where an exception is not something you want to throw. Like in a system-level service running under node.js.

In such cases you should catch whatever exception occurs and call SetLastError() with the description. The caller will then check if something went wrong and can take measures.
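Here is a minimal sketch of the pattern (TW3MyService and ReadConfig are hypothetical, made up for this example; SetLastError, ClearLastError and LastError are the members shown above):

type
  TW3MyService = class(TW3OwnedErrorObject)
  public
    function ReadConfig(const FileName: string): boolean;
  end;

function TW3MyService.ReadConfig(const FileName: string): boolean;
begin
  ClearLastError;
  try
    // ... load and parse the configuration file here ...
    result := true;
  except
    // never let the exception escape a running service;
    // record it and report failure instead
    on e: Exception do
    begin
      SetLastError(e.Message);
      result := false;
    end;
  end;
end;

// the caller checks for failure instead of wrapping everything in try/except
// (assuming Service: TW3MyService has been created)
if not Service.ReadConfig('settings.json') then
  writeln('Could not load config: ' + Service.LastError);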

Embedded, a different beast

One of the things Smart Pascal does exceptionally well is embedded platforms. Regardless of what you have been hired to do – be it a kiosk booth in a museum, a touch-based vending machine, a parking payment terminal or just some awesome IoT project – Smart Pascal makes this child's play. And the reason it's so easy to write powerful web applications is precisely the broad scope of our RTL.

Smart Pascal is not a tool for creating websites (well you can, but that’s not what it was designed for). It is a tool to write high quality, complex and feature rich applications.

The distinction between a “web application” and a “website” may seem subtle at first, but it all boils down to depth. The Smart RTL now contains a huge amount of classes and third party libraries that simplify writing modern applications with depth.

Other frameworks that (for some reason) are very popular, like JQuery, JQuery UI and ExtJS, lack this depth. They look great, but that's it. I recently caught up with a project that had taken the developers a whole year to create in pure JS. It took me 3 weeks to finish what they had spent over 12 months implementing. And that has to do with depth and proper inheritance – not the absurd prototype cloning JavaScript coders use.

To sum up: if you want to enjoy the same features and power in the browser as you do with Delphi or Lazarus on your desktop – then you can't expect Smart to produce binaries smaller than its native counterparts.

But more importantly, on embedded systems like the Raspberry PI, the UP x86 board or the new and sexy Tinkerboard – do you really think it's going to matter if your application is 4, 8 or 10 megabyte? By now you should have realized that our compressor and obfuscator bake that down to 467 kilobyte or 1.5 megabyte – both peanuts compared to a similar system written in Java, C# or Delphi.

Ask and ye shall receive

I don't really care who shared my RTL, but I am sad that it happened. I'm not a difficult person when it comes to these things – all you have to do is ask.

Also remember that the RTL is a copyrighted product. It belongs to a registered financial entity (read: company) and is not subject to distribution.

FMX4Linux is coming, and we can't wait!

May 3, 2017 Leave a comment

When Embarcadero announced Linux support for their Tokyo release of Delphi, my soul literally left my body for a moment. Could it really be true? After all these years, had Embarcadero done what many said would be impossible?

I must admit that people saying something is impossible has lost its sting for me. Over the past 3 years I have done one of these “impossible” things on a weekly basis, yet people are just as shocked every single time. But this time – I was the one in a state of excitement.

As an outspoken (an understatement perhaps) and active blogger, my skin has grown thick over the years. I also get to see a lot of cool tech long before it's commercially available and mainstream – so it takes more for me to be swayed and dazzled. And you grow a healthy instinct for separating bullshit from true technical achievements too.

Tokyo

Delphi Tokyo is probably one of the finest Delphi editions to date

But yes, I admit it – this time Embarcadero surprised me in every positive way imaginable. If you follow my blog you know that I call it as I see it and hold little back. But this was a purely positive experience.

Now I know what you are going to say: everyone knew about this, right? Yeah, me too. But my mind has been elsewhere lately with work and projects, so I didn't catch the release buzz from the closed forums (well, “closed” is a matter of perspective; I have friends from Russia to the United States, and from China to Sudan) – and for the first time since the Borland days, Embarcadero got the drop on me.

I'm the kind of guy that runs on passion. Delphi, Smart Pascal and even object pascal as a general language is not just work for me – it's something I love to use. I relax and enjoy myself when coding. So you can imagine my reaction when my boss sent me a message with “Download Tokyo and give me a report”. It was close to midnight but I was out of that bed faster than bacon on toast, ran into my home office wearing nothing but boxers and a Commodore t-shirt – and threw myself into Embarcadero Developer Network's download section.

From hero to zero in 2 seconds

I think it was around 07:00 the next morning, one hour before I was due for work, that the magic phrase “command line and system services [daemons] only” hit home. And yes, I “kinda” knew that before – but maybe, just maybe, Embarcadero had thrown in visual applications in the 11th hour. I even had a friendly bet with Jim McKeeth that an FMX UI solution would appear less than 24 hours after release (more about that later).

linuxproj

Now that is a beautiful thing

Either way, it was quite the anti-climax after all that work in VMWare installing Delphi from scratch, waiting, hoping and praying. First of all because I remember watching an in-depth technical review of Firemonkey by David Intersimone a few years back; the one where he describes the Firemonkey architecture in detail – especially how the abstraction layer between the visual control framework and the actual OS made it possible for FMX to quickly adapt to new environments. Firemonkey is a complex and highly adaptable framework, but its biggest strength is, paradoxically enough, its simplicity. A simplicity achieved through abstracting the UI from the rendering functionality. At least as much as possible; you still have to deal with OS-level windowing, security and all of that – so it's no walk in the park either.

In the presentation (sadly I don't have a link to this one) David went to great lengths to explain that regardless of operating system, as long as someone implemented a driver class that exposed the set of features FMX needs – Firemonkey would run as long as the compiler supported the instruction set. Visual engines could be DirectX, OpenGL, Cairo or whatever makes sense on that particular platform. As long as the “bridge class” talking with the operating system is there – Firemonkey can run on a toaster if need be.

So Firemonkey has the same abstraction concept that we use in the VJL (Visual JavaScript Component Library) for Smart Pascal (differences notwithstanding). If it's Windows you are running on, DirectX is used; if it's OS X, then Apple's implementation of OpenGL runs the show – and if you are on Linux you can pick between OpenGL and Cairo. I must admit I haven't looked too closely at Cairo, but I know it was designed to make advanced composition and UI rendering more efficient.

Why Tokyo didn't ship with visual application support is beyond me, but considering the timing of what happened next, I have drawn my own conclusions. It doesn't really matter to be honest – the point is we got it, and it rocks!

FMX4Linux to the rescue!

Remember the wager I mentioned with Jim McKeeth? It wasn't a serious wager; I just commented “a Linux FMX solution will appear within 24hrs, you can bet on it” and added a smiley. I had no idea who or how, I just knew it was going to turn up.

Because one thing is a sure bet: the Delphi community is a group made up of highly resourceful, inventive and clever people. And I was pretty sure that it wouldn't take many hours before someone came up with a patch or framework to fill the void. And right I was. Less than 24 hours later it was fact rather than conjecture.

fmxlinux

This is just a must have. There is no debate.

So, less than 24 hours after Delphi Tokyo hit the shelves, Eugene Kryukov and Alexey Sharagin presented “FMX for Linux”. It gives you both the missing rendering back-end that talks to the system – and some kind of “widget mapping” (as far as I can understand; I won't pretend to know how they did it) that renders your UI according to GTK. So it's not just a simple “patch”, theme or class that calls a handful of external routines; it's a full visual implementation of FMX for Linux. Impressive? Oh yeah, and then some!

Let us explore!

Over the next few days I will be giving you an in-depth look at how this system works. I'm also going to test drive Html Components and see if we can get that running under Linux as well. Since FMX for Linux is still in development we have to make allowances for that – but being able to target Ubuntu is pretty cool! And Remobjects, glorious Remobjects SDK – if that works out of the box I will dance the jig and upload it to YouTube, I swear to cow!

Remobjects_linux

Linux is about to feel the full onslaught of object pascal

There is little doubt what the next Smart Pascal IDE will be based on, and when you combine FMX for Linux, HTML Components, Remobjects SDK and TMS into one – you have serious firepower to play with; enough to give even the most hardened C/C++ QT developer reason to be scared. And that is before the onslaught of Data Abstract and the Remobjects C# native compiler.

Oh man next weekend is going to be the best ever!

Let’s not forget ARM targets

asus

The “PI killer” has arrived!

On a second note, we will also be looking at my latest embedded toys – namely the Asus Tinkerboard. I just got two of them in the mail today. It's going to be exciting to see how it fares against the Raspberry PI, ODroid XU4 and the Intel Atom based UP board.

We will also be testing how these two cards can be clustered together using node.js to work as one – and see how that impacts performance for our node.js based Smart Desktop project. These are exciting times indeed!

To make things even more interesting I will be pitching the Tinkerboard against the ODroid XU4 (the original version, not the passively cooled, save-the-environment unicorn they push now) and against both the x86 UP v1 and the Raspberry 3b. Although I think the Raspberry PI is in for the beating of its life when the ODroid and the Tinker are overclocked to blood-lust mode!

smartdesk

The Smart desktop has a powerful node.js back-end that packs a punch

So, when LLVM-optimized JavaScript runs MC68040 machine-code at 4 times the speed of a high-end Amiga 4000, I will be content.

So let's do another “that's impossible”, shall we 🙂

Smart-Pascal: A brave new world, 2022 is here

April 29, 2017 Leave a comment

Trying to explain what Smart Mobile Studio does, and the impact it can have on your development cycle, is very hard. The market is rampant with superficial frameworks that promise you the world, and investors have been taken for a ride by hyped-up, one-click “app makers” more than once.

I can imagine that being an investor is a bit like panning for gold. Things that glitter the most often turn out to be worthless – yet fortunes may hide beneath unpolished and rugged surfaces.

Software will disrupt most traditional industries in the next 5-10 years.
Uber is just a software tool, they don't own any cars, yet they are now the
biggest taxi company in the world. – Source: R.M. Goldman, Ph.D

So I had enough. Instead of trying to tell people what I can do, I decided to show them instead. As the Americans say: “talk is cheap”. And a working demonstration is worth a thousand words.

Care to back that up with something?

A couple of weeks ago I published a video on YouTube of our Smart Pascal based desktop booting up in VMWare. The Amiga forums went off the chart!

vmware

For those that haven't followed my blog, or know nothing about the desktop I'm talking about, here is a short summary of the events so far:


Smart Mobile Studio is a compiler that takes Pascal – the dialect made popular by Delphi and Lazarus – and compiles it to JavaScript instead of machine-code.

This product has shipped with an example of a desktop for years (called “Quartex media desktop”). It was intended as an example of how you could write a front-end for kiosk machines and embedded devices. Systems that could use a touch screen as the interface between customer and software.

You have probably seen those info booths in museums, universities and libraries? Or the ticket machines in subways, train-stations or even your local car-wash? All of those are embedded systems. And up until recently these have been small and expensive computers for running Windows applications in full-screen. Applications which in turn talk to a server or local database.

Smart Mobile Studio is able to deliver the exact same (and more) for a fraction of the price. A company in Oslo replaced their $300 per-board units – with off-the-shelf $35 Raspberry Pi mini-computers. They then used Smart Pascal to write their client software and ran it in a fullscreen browser. The Linux distribution was changed to boot straight into Firefox in full-screen. No Linux desktop, just a web display.

The result? They were able to cut production cost by $265 per unit.


Right, back to the desktop. I mentioned the Amiga community. This is a community of coders and gamers that grew up with the old Commodore machines back in the 80s and 90s. A new Amiga is now on the way (it just took 20+ years) – and the look and feel of the new operating-system, Amiga OS 4.1, is the look and feel I have used in The Smart Desktop environment. First of all because I grew up on these machines myself, and secondly because the architecture of that system was extremely cost-effective. We are talking about a system that delivered pre-emptive multitasking in as little as 512Kb of memory (!). So this is my “ode to OS 4” if you will.

And the desktop has caused quite a stir in the Delphi, cloud and retro communities alike. Why? Because it shows some of the potential cloud technology can give you. Potential that has been under their nose all this time.

And even more important: it demonstrates how productive you can be in Smart Pascal. The operating system itself, both the visual and non-visual parts, was put together in my spare time over 3 weeks. Had I been able to work on it daily (as a normal job) I would have knocked it out in a week.

A desktop as a project type

All programming languages have project types. If you open up Delphi and click “new” you are greeted with a rich menu of different projects you can make. From low-level DLL files to desktop applications or database servers. Delphi has it all.

delphistuff

Delphi offers a wide range of project types you can create

The same is true for Visual Studio. You click “new solution” and can pick from a wide range of different projects: web projects, servers, desktop applications and services.

Smart Pascal is the only system where you click “new project” and find the types “Smart desktop” and “Smart desktop application”. In other words, the power to create a full desktop is now an integrated part of Smart Pascal.

And the desktop is unique to you. You get to shape it, brand it and make it your own!

Let us take a practical example

Imagine a developer given the task to move the company’s aging invoice and credit system from the Windows desktop – to a purely web-based environment.

legacy2

The application itself is large and complex, littered with legacy code and “quick fixes” going back decades. Updating such a project is in itself a monumental task – but having to first implement concepts like what a window is, tasks, user space, cloud storage, security endpoints, look and feel, back-end services and database connectivity; all of that before you even begin porting the invoice system itself? The cost is astronomical.

And it happens every single day!

In Smart Pascal, the same developer would begin by clicking “new project” and selecting “Smart desktop”. This gives him a complete desktop environment that is unique to his project and company.

A desktop that he or she can shape, adjust, alter and adapt according to the needs of the employer. Things like file-type recognition, storage and database access – all of these things are taken care of already. The developer can focus on his task, namely to deliver a modern implementation of their invoice and credit software – not waste months trying to force JavaScript frameworks to do things they simply lack the depth to deliver.

Once the desktop has the look and feel in order, he would have to make a simple choice:

  • Should the whole desktop represent the invoice system or ..
  • Should the invoice system be implemented as a secondary application running on the desktop?

If it’s a large and dedicated system where the users have no need for other programs running, then implementing the invoice system inside the desktop itself is the way to go.

If however the customer would like to expand the system later – perhaps add team management, third-party web-services or OpenOffice-like productivity (a unified intranet if you like) – then the second option makes more sense.

On the brink of a revolution

The developer of 2022 is not limited to the desktop. He is not restricted to a particular operating system or chip-set. Fact is, cloud has already reduced these to a matter of preference. There is no strategic advantage to using Windows over Linux when it comes to cloud software.

Where a traditional developer writes and implements a solution for a particular system (for instance Microsoft Windows, Apple OS X or Linux) – cloud developers deliver whole ecosystems; constellations of software constructed from many parts, both micro-services developed in-house and services from others, like Amazon or Azure.

All these parts co-operate and can be combined through established end-point standards, much like how components are used in Delphi or Visual Studio today.

smartdesk

The Smart Desktop, codename “Amibian.js”

Access to products written in Smart is through the browser, or sometimes through a “paper thin” native host (Cordova PhoneGap, Delphi and C/C++) that exposes system-level functionality. These hosts wrap your application in a native, executable container ready for the App Store or Google Play.

Now the visual content is typically the same, and is only adapted for a particular device. The real work is divided between the client (which is now very much capable) and your server back-end.

So people still write code in 2022, but the software behaves differently and is designed to function as a group (cluster). And this requires a shift in the way we think.

asmjs

Above: One of my asm.js prototype compilers. Let's just say it runs fast!

Scaling a solution from processing 100 invoices a minute to handling 100.000 invoices a minute is no longer a matter of code, but of architecture. This is where the traditional, native-only approach to software comes up short, while more flexible approaches like node.js are infinitely more capable.

What has emerged up until now is just the tip of the iceberg.

Over the next five to eight years, everything is going to change. And the changes will be irrevocable and permanent.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for :)

The Smart Desktop back-end running as a system service on a Raspberry PI

As the Americans say, talk is cheap – and I’m done talking. I will do this with you, or without you. Either way it’s happening.

Nightly-build of the desktop can be tested here: http://quartexhq.myasustor.com/

Smart Desktop: Inter frame communication

April 24, 2017 Leave a comment

What is a process in a web desktop environment? I have been thinking a bit about this, and it's actually not that easy to pin down an external script or website running in a frame as a solid “concept” (I use the word entity in my research notes). Visually it appears as a secondary, isolated web application – but ultimately (or technically) it's just another web object with its own rendering context.

amibian_cortana

What I'm talking about is programs. Amibian.js, or Smart desktop, can already run a wealth of applications – which ultimately are just external or internal web-pages (a simplistic term in some cases, but it will have to do). But when you want complex behavior from an eco-system, you must also provide an equally complex means of communication.

And this brings me neatly to the heart of today's task – namely how the desktop can talk to running web-applications and vice versa. There are ultimately only two real options to choose from here: pure client, or client-server.

Unlike other embedded / web desktops I want to do as much as possible client-side. It would take me five minutes to implement message dispatching if I did this server-side, but the less dependent we are on a server – the more flexible amibian.js will be in terms of integration.

Screenshot

The first Desktop-Application type running in the desktop! And it's aware that it is running under Amibian. This frame/host awareness helps establish the parent/child relationship

Turns out that the browser has an API for this. A message API that allows a main website to talk with other websites embedded in IFrames. Which is just what we need.

But it's very crude and not as straightforward as you may think. You have commands like postMessage(), and you can set up an event-handler that fires whenever something comes through – see the sketch below. But you also have security measures to think about. What if you interface with a web-application designed to kill the system? Without the safety of an abstraction layer, that would indeed be a very easy hack.
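To give you an idea of how bare-bones the raw API is, here is a quick sketch (browser API only, no RTL; FrameHandle is assumed to be the handle of an embedded IFrame):

procedure RawMessageDemo(const FrameHandle: THandle);
begin
  asm
    // post a string to the embedded frame; "*" means any target origin
    (@FrameHandle).contentWindow.postMessage("hello child", "*");

    // and listen for anything posted back to our own window
    window.addEventListener("message", function (event) {
      console.log("got: " + event.data + " from " + event.origin);
    }, false);
  end;
end;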

The Smart message API

This has been in the RTL for quite some time, but I haven't really used it for much. I had one demo a couple of years back that used the message API to handle off-screen rendering in a separate IFrame (a sort of poor man's thread), but that was it.

Well, time to update and bring it into 2017! So I give you the new and improved inter-frame, inter-window and inter-domain message framework! (ta da!)

It should be self-evident how to use it, but a little context won't hurt:

  • You inherit from TW3MessageData and implement whatever data-fields you need
  • You send messages using the W3_PostMessage() procedure
  • You broadcast messages (i.e. send the message to all frames and windows listening) using the W3_BroadcastMessage() procedure

But here comes the twist, because I hate having to write a bunch of if/then/else switches for messages. So instead I created a simple subscription system! Super small yet very efficient!

msgbroker

In other words, you inform the system which messages you support, and that becomes your subscription. Whenever the local dispatcher encounters that particular message type – it will forward the message object to you, as the sketch below shows.
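A minimal usage sketch, based on the classes in the unit below (the MSG_PING id and the payload are made up for this example):

const
  MSG_PING = CNT_QTX_MESSAGES_BASEID + 100; // made-up message id

procedure SetupPingDemo;
var
  LSub: TW3MessageSubscription;
  LMsg: TW3MessageData;
begin
  // subscribe: the callback fires whenever a MSG_PING message arrives
  LSub := TW3MessageSubscription.Create;
  LSub.Subscribe(MSG_PING,
    procedure (Message: TW3MessageData)
    begin
      writeln('Ping from ' + Message.Source + ': ' + Message.Data);
    end);

  // post: any window or frame listening for MSG_PING will receive this
  LMsg := TW3MessageData.Create;
  LMsg.ID := MSG_PING;
  LMsg.Data := 'hello';
  W3_PostMessage(LMsg);
end;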

Here is the full unit. Notice the helper functions that deal with finding the current domain and so on – these will be handy when establishing a parent/child relationship.

Also note: you are expected to inherit from these classes and shape them to your message types. I have implemented the bare-bones version that mimics the Windows message API. And yes, please google inter-frame communication while testing this.

unit SmartCL.Messages;

interface

uses
  w3c.dom,
  System.Types,
  System.Time,
  System.JSON,
  Smartcl.Time,
  SmartCL.System;

const
  CNT_QTX_MESSAGES_BASEID = 1000;

type

  TW3MessageData         = class;
  TW3CustomMsgPort       = class;
  TW3OwnedMsgPort        = class;
  TW3MsgPort             = class;
  TW3MainMessagePort     = class;
  TW3MessageSubscription = class;

  TW3MessageSubCallback  = procedure (Message: TW3MessageData);
  TW3MsgPortMessageEvent = procedure (Sender: TObject; EventObj: JMessageEvent);
  TW3MsgPortErrorEvent   = procedure (Sender: TObject; EventObj: JDOMError);

  (* The TW3MessageData represents the actual message-data which is sent
     internally by the system. Notice that it inherits from JObject, which
     means it supports 1:1 mapping from JSON.
     This is why Deserialize is a member, and Serialize is a class member. *)
  TW3MessageData = class(JObject)
  public
    property    ID: Integer;
    property    Source: String;
    property    Data: String;

    function Deserialize: string;                                    // object -> JSON text
    class function Serialize(const ObjText: string): TW3MessageData; // JSON text -> object

    constructor Create;
  end;

  (* A "mesage port" is a wrapper around the [obj]->Window[]->OnMessage
     event and [obj]->Window[]->PostMessage API.
     Where [obj] can be either the main window, or an embedded
     IFrame->contentWindow. This base-class implements the generic behavior.
     You must inherit or use a decendant class if you dont know exactly what
     you are doing! *)
  TW3CustomMsgPort = Class(TObject)
  private
    FWindow:    THandle;
  protected
    procedure   Releasewindow;virtual;
    procedure   HandleMessageReceived(eventObj: JMessageEvent);
    procedure   HandleError(eventObj: JDOMError);
  public
    Property    Handle:THandle read FWindow;
    Procedure   PostMessage(msg:Variant;targetOrigin:String);virtual;
    procedure   BroadcastMessage(msg:Variant;targetOrigin:String);virtual;
    Constructor Create(WND: THandle);virtual;
    Destructor  Destroy;Override;
  published
    property    OnMessageReceived: TW3MsgPortMessageEvent;
    Property    OnError: TW3MsgPortErrorEvent;
  end;

  (* This is a baseclass for window-handles that already exist.
     You are expected to provide that handle via the constructor *)
  TW3OwnedMsgPort = Class(TW3CustomMsgPort)
  end;

  (* This message-port type is very different from both the
     ad-hoc baseclass and the "owned" variation above.
     This one actually creates its own IFrame instance, which
     means it's a completely stand-alone entity which doesn't need an
     existing window to dispatch and handle messages *)
  TW3MsgPort = Class(TW3CustomMsgPort)
  private
    FFrame:     THandle;
    function    AllocIFrame: THandle;
    procedure   ReleaseIFrame(aHandle: THandle);
  protected
    procedure   ReleaseWindow; override;
  public
    constructor Create; reintroduce; virtual;
  end;

  (* This message port represent the "main" message port for any
     application that includes this unit. It will connect to the main
     window (deriving from the "owned" base class) and has a custom
     message-handler which dispatches messages to any subscribers *)
  TW3MainMessagePort = Class(TW3OwnedMsgPort)
  protected
    procedure HandleMessage(Sender: TObject; EventObj: JMessageEvent);
  public
    constructor Create(WND: THandle); override;
  end;

  (* Information about registered subscriptions *)
  TW3SubscriptionInfo = Record
    MSGID:    integer;
    Callback: TW3MessageSubCallback;
  end;

  (* A message subscription is an object that allows you to install
     X number of event-handlers for messages you want to receive. It's
     important to note that all subscribers to a message will get the
     same message -- there are no blocking or ownership concepts
     involved. This system is a huge improvement over the older WinAPI *)
  TW3MessageSubscription = class(TObject)
  private
    FObjects:   Array of TW3SubscriptionInfo;
  public
    function    ValidateMessageSource(FromURL: string): boolean; virtual;
    function    SubscribesToMessage(const MSGID: integer): boolean; virtual;
    procedure   Dispatch(const Message: TW3MessageData); virtual;
    function    Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
    procedure   Unsubscribe(const Handle: THandle);

    Constructor Create; virtual;
    Destructor  Destroy; override;
  end;

  (* Helper functions which simplify message handling *)
  //function  W3_MakeMsgData:TW3MessageData;
  procedure W3_PostMessage(const msgValue: TW3MessageData);
  procedure W3_BroadcastMessage(const msgValue: TW3MessageData);

  (* Audience returns true if a message-ID has any
     subscriptions assigned to it *)
  function  W3_Audience(msgId: integer): boolean;

  (* This returns true if the current smart application is embedded in a frame.
     That can be handy to know when establishing parent/child relationships
     between main-application and child "programs" running in frames *)
  function AppRunningInFrame: boolean;
  function GetUrlLocation: string;
  function GetDomainFromUrl(URL: string; const IncludeSubDomain: boolean): string;

implementation

uses SmartCL.System;

var
_mainport:    TW3MainMessagePort = NIL;
_subscribers: Array of TW3MessageSubscription;

function AppRunningInFrame: boolean;
var
  LMyFrame: THandle;
begin
  // window.frameElement Returns the element (such as <iframe> or <object>)
  // in which the window is embedded, or null if the window is top-level.
  try
    // Note: attempting to read frameElement will throw a SecurityError
    // exception in cross-origin iframes. It should just return null,
    // but not all browsers play by the rules -- hence the try/catch
    asm
      @LMyFrame = window.frameElement;
    end;
  except
    // a SecurityError here means we are inside a cross-origin frame,
    // which by definition is not top-level
    on e: exception do
    begin
      result := true;
      exit;
    end;
  end;
  // frameElement is non-null when embedded, null when top-level
  result := (LMyFrame);
end;

function GetUrlLocation: string;
begin
  asm
    @result = window.location.hostname;
  end;
end;

function GetDomainFromUrl(Url: string; const IncludeSubDomain: boolean): string;
begin
  url := Url.trim();
  if url.length < 1 then
  begin
    asm
      @Url = window.location.hostname;
    end;
  end;
  asm
    @url = (@url).replace(/(https?:\/\/)?(www\.)?/i, ''); // regex literal, not a string
    if (!@IncludeSubDomain) {
        @url = (@url).split('.');
        @url = (@url).slice((@url).length - 2).join('.');
    }
    if ((@url).indexOf('/') !== -1) {
      @result = (@url).split('/')[0];
    } else {
      @result = @url;
    }
  end;
end;

//#############################################################################
//
//#############################################################################

procedure QTXDefaultMessageHandler(sender:TObject;EventObj:JMessageEvent);
var
  x:      Integer;
  mItem:  TW3MessageSubscription;
  mData:  TW3MessageData;
begin
  mData := TW3MessageData.Serialize(EventObj.Data);

  for x:=0 to _subscribers.count-1 do
  Begin
    mItem := _subscribers[x];

    // Check that the message is subscribed to
    if mItem.SubscribesToMessage(mData.ID) then
    begin
      // Make sure subscriber validates the caller-domain
      if mItem.ValidateMessageSource( TVariant.AsString(EventObj.source) ) then
      begin
        (* We execute with a minor delay, allowing the browser to
         exit the function before we dispatch our data *)
        TW3Dispatch.Execute(procedure ()
        begin
          mItem.Dispatch(mData);
        end, 10);
      end;
    end;
  end;
end;

{ function W3_MakeMsgData: TW3MessageData;
begin
  result := new TW3MessageData();
  result.Source := "*";
end;      }

function W3_GetMsgPort: TW3MainMessagePort;
begin
  if _mainport = NIL then
  _mainport := TW3MainMessagePort.Create(BrowserAPI.Window);
  result := _mainport;
end;

procedure W3_PostMessage(const msgValue:TW3MessageData);
begin
  if msgValue<>NIL then
  W3_GetMsgPort().PostMessage(msgValue.Deserialize,msgValue.Source) else
  raise exception.create('Postmessage failed, message object was NIL error');
end;

procedure W3_BroadcastMessage(const msgValue:TW3MessageData);
Begin
  if msgValue<>NIL then
  // serialize to JSON text, just like W3_PostMessage, so receivers can parse it
  W3_GetMsgPort().BroadcastMessage(msgValue.Deserialize,msgValue.Source) else
  raise exception.create('Broadcastmessage failed, message object was NIL error');
end;

function  W3_Audience(msgId: Integer): boolean;
var
  x:  Integer;
  mItem:  TW3MessageSubscription;
begin
  result:=False;
  for x:=0 to _subscribers.count-1 do
  Begin
    mItem:=_subscribers[x];
    result:=mItem.SubscribesToMessage(msgId);
    if result then
    break;
  end;
end;

//#############################################################################
// TW3MessageData
//#############################################################################

Constructor TW3MessageData.Create;
begin
  self.Source:="*";
end;

function  TW3MessageData.Deserialize: string;
begin
  result := JSON.Stringify(self);
end;

class function TW3MessageData.Serialize(const ObjText: string): TW3MessageData;
begin
  result := TW3MessageData(JSON.parse(ObjText));
end;

//#############################################################################
// TW3MainMessagePort
//#############################################################################

constructor TW3MainMessagePort.Create(WND: THandle);
begin
  inherited Create(WND);
  OnMessageReceived := QTXDefaultMessageHandler;
end;

procedure TW3MainMessagePort.HandleMessage(Sender: TObject; EventObj: JMessageEvent);
begin
  QTXDefaultMessageHandler(self, eventObj);
end;

//#############################################################################
// TW3MessageSubscription
//#############################################################################

Constructor TW3MessageSubscription.Create;
begin
  inherited Create;
  _subscribers.add(self);
end;

Destructor TW3MessageSubscription.Destroy;
Begin
  _subscribers.Remove(self);
  inherited;
end;

function TW3MessageSubscription.Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
var
  LObj: TW3SubscriptionInfo;
begin
  LObj.MSGID := MSGID;
  LObj.Callback := @CB;
  FObjects.add(LObj);
  result := LObj;
end;

procedure TW3MessageSubscription.Unsubscribe(const Handle: THandle);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  begin
    if variant(FObjects[x]) = Handle then
    Begin
      FObjects.delete(x,1);
      break;
    end;
  end;
end;

function TW3MessageSubscription.ValidateMessageSource(FromURL: string): boolean;
begin
  // by default we accept messages from anywhere
  result := true;
end;

function TW3MessageSubscription.SubscribesToMessage(const MSGID: integer): boolean;
var
  x:  Integer;
begin
  result:=False;
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = MSGID then
    Begin
      result:=true;
      break;
    end;
  end;
end;

procedure TW3MessageSubscription.Dispatch(const Message:TW3MessageData);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = Message.ID then
    Begin
      if assigned(FObjects[x].Callback) then
      FObjects[x].Callback(Message);
      break;
    end;
  end;
end;

//#############################################################################
// TW3MsgPort
//#############################################################################

Constructor TW3MsgPort.Create;
begin
  FFrame:=allocIFrame;
  if (FFrame) then
  inherited Create(FFrame.contentWindow) else
  Raise Exception.Create('Failed to create message-port error');
end;

procedure TW3MsgPort.ReleaseWindow;
Begin
  ReleaseIFrame(FFrame);
  FFrame:=unassigned;
  Inherited;
end;

Procedure TW3MsgPort.ReleaseIFrame(aHandle:THandle);
begin
  If (aHandle) then
  Begin
    asm
      document.body.removeChild(@aHandle);
    end;
  end;
end;

function TW3MsgPort.AllocIFrame: THandle;
Begin
  asm
    @result = document.createElement('iframe');
  end;

  if (result) then
  begin
    (* if no style property is present, we provide one *)
    if not (result['style']) then
    result['style'] := TVariant.createObject;

    (* set the visible style to hidden *)
    result['style'].display := 'none';

    asm
      document.body.appendChild(@result);
    end;

  end;
end;

//#############################################################################
// TW3CustomMsgPort
//#############################################################################

Constructor TW3CustomMsgPort.Create(WND: THandle);
Begin
  inherited Create;
  FWindow := WND;
  if (FWindow) then
  begin
    FWindow.addEventListener('message', @HandleMessageReceived, false);
    FWindow.addEventListener('error', @HandleError, false);
  end;
End;

Destructor TW3CustomMsgPort.Destroy;
begin
  if (FWindow) then
  begin
    FWindow.removeEventListener('message', @HandleMessageReceived, false);
    FWindow.removeEventListener('error', @HandleError, false);
    ReleaseWindow;
  end;
  inherited;
end;

procedure TW3CustomMsgPort.HandleMessageReceived(eventObj:JMessageEvent);
Begin
  if assigned(OnMessageReceived) then
  OnMessageReceived(self,eventObj);
End;

procedure TW3CustomMsgPort.HandleError(eventObj:JDOMError);
Begin
  if assigned(OnError) then
  OnError(self,eventObj);
end;

procedure TW3CustomMsgPort.Releasewindow;
begin
  FWindow := Unassigned;
end;

Procedure TW3CustomMsgPort.PostMessage(msg:Variant;targetOrigin:String);
begin
  if (FWindow) then
  FWindow.postMessage(msg,targetOrigin);
end;

// This will send a message to all frames
procedure TW3CustomMsgPort.BroadcastMessage(msg:Variant;targetOrigin:String);
var
  x:  Integer;
  mLen: Integer;
begin
  mLen := TVariant.AsInteger(browserAPI.Window.frames.length);
  for x:=0 to mLen-1 do
  begin
    browserAPI.window.frames[x].postMessage(msg,targetOrigin);
  end;
end;

finalization
begin
  if assigned(_mainport) then
  _mainport.free;
end;

end.

Amibian.js and the Narcissus hack

April 22, 2017 Leave a comment

Wow, I must admit that I never really thought Amibian.js would become even remotely as popular as it has – yet people respond with incredible enthusiasm to our endeavour. I was just told that an article at Commodore USA mentioned us – that's exposure to 37000 readers. Add to that the roughly 40.000 people that subscribe to my feeds around the world, and I must say: I hope I code something worthy of your time!

amibian_cortana

This is going to look and behave like the bomb when I'm done!

But there is a lot of stuff on the list before it's even remotely finished. This is due to the fact that I'm not just juggling one codebase here – I'm juggling 5 separate yet interconnected codebases at the same time (!). First there is the Smart Mobile Studio RTL (run-time library), which represents the foundation. This gives me object-oriented, fully inheritance-driven visual controls, with roughly 5 years of work behind it.

138-Quake_III_Arena-1479915663

Still #1 after all these years

On top of that you have the actual visual controls, like buttons, scrollbars, lists, css3 effect engines, tweening, database storage and a ton of low-level stuff. The browser has no idea what a window is, for example, let alone how it should look or respond to users. So every little piece has to be coded by someone. And well, that's what I do.

Next you have the workbench and operating-system itself. What you know as Amibian.js, Smart Workbench or Quartex media desktop – take your pick – is already a substantial codebase spanning some 40 units with thousands of lines of code. It is divided into two parts: the web front-end that you have all seen, and the node.js back-end that is not yet made public.

And on top of that you have the external stuff. Quake III didn’t spontaneously self-assemble inside the desktop, someone had to do some coding and make the two interface. Same with all the other features you have.

The worst so far (as in damn hard to get right) is the Ace text editor. Ace by itself is super easy to work with – but you may have noticed that we have removed its scrollbars and replaced these with Amiga scrollbars instead? That is a formidable challenge in its own right.

patch2

Aced it! Kidnapping signals was a challenge

What's on the menu this weekend?

I noticed that on Linux text-selection was utterly messed up, so when you moved a window around – it would suddenly start selecting the title text of other elements around the desktop. This is actually a bug in the browser – not my code; but I still have to code around it. Which I have now done.

I also solved selection for the console window (or any “text” container; a window is made up of many parts, and the content region can be inherited from and replaced), so that should now work fine regardless of browser and platform. Ace theming also works, and the vertical scrollbar is responding as expected. It still needs a few tweaks to move right, but that is easy stuff. The hard part is behind us, thankfully.

patch

Selection – works

Right now I’m working on ScummVM so that should be in place later today 🙂

That's cool, but what motivates you?

2jd3x2q

Cult of Joy

Retro gaming is important, and we have to make it easy for people to enjoy their retro gear without patent trolls ruining the fun. I'm just so tired of how ruined the Amiga scene is by these people (3 companies in particular) – thieves is the only word I can find that fits.

So fine! I will make my own. Come hell or high water. Free as a bird and untouchable.

So I have made some tools that will make it ridiculously easy for you to share, download and play your games online. Whenever you want, hosted wherever you need – and there is not a god damn thing people can do about it. When you realize how simple the hack is, it will make you laugh. I came up with this ages ago and dubbed it Narcissus.

To understand the Narcissus hack, consider the following:

PNG is a lossless compression format, meaning that it doesn't lose any information when compressed. It's not like JPEG, which scrambles the original and saves a facsimile that tricks the eye. Nope – if you compress a PNG image, you get the exact same data out when you decode it (read: show it).

But who said we have to store pixels? Pixels are just bytes after all. In fact, why can't we take a whole game disk or ROM and store that inside a picture? Sure you can!
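Here is a minimal sketch of the encoding step, using nothing but the raw canvas API (this is not the actual Amibian.js tool, and the helper name is made up; a real implementation would also store the original byte count and encrypt the payload):

function BytesToPngUrl(const Data: array of integer): string;
begin
  asm
    var bytes = (@Data);
    // three data bytes per pixel (R,G,B); alpha stays 255 to avoid the
    // premultiplied-alpha rounding browsers apply to transparent pixels
    var pixels = Math.ceil(bytes.length / 3);
    var side   = Math.ceil(Math.sqrt(pixels));
    var canvas = document.createElement('canvas');
    canvas.width  = side;
    canvas.height = side;
    var ctx = canvas.getContext('2d');
    var img = ctx.createImageData(side, side);
    var src = 0;
    for (var i = 0; i < img.data.length; i += 4) {
      img.data[i]   = (src < bytes.length) ? bytes[src++] : 0;
      img.data[i+1] = (src < bytes.length) ? bytes[src++] : 0;
      img.data[i+2] = (src < bytes.length) ? bytes[src++] : 0;
      img.data[i+3] = 255;
    }
    ctx.putImageData(img, 0, 0);
    // PNG is lossless, so the bytes survive the round-trip exactly
    @result = canvas.toDataURL('image/png');
  end;
end;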

cool

This tool is now built into Amibian.js

It’s amusing, I came up with this hack years ago. It has been a part of Smart Mobile Studio since the beginning.

You have to remember that retro games are super small compared to modern games. The average ADF file is what, 880kb or something like that? Well hold on to your hat buddy, because a single PNG can easily hold 64 megabyte of data! You can encode a decent Amiga hard disk image in 64 megabyte.

eatme

Can you guess what the picture on the right contains? This picture is actually ALL the Amiga ROM files packed into a single image. Don't worry, I converted it to JPEG to mess up the data before uploading. But yes, you can now host not just the games as normal picture files, but also ROMs and whatever you like.

And the beauty of it – who the hell is going to find them? You can host them on Github, Google Drive, Dropbox or right here on your blog – if you don't have the encryption key, the file is useless.

Snap, crackle and pop!

RSS Filesystem

You know RSS feeds, right? If you sign up for a blog you automatically get an RSS feed. It's basically just a list of your recent posts – perhaps with an extract from each article, a thumbnail picture and links to each post. RSS has been around for a decade or more. It's a great way to keep track of news.

The second hack is that, using the data-to-image encoder, you can store a whole read-only filesystem as a normal RSS feed, along the lines of the sketch below. Always think outside the box!
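Conceptually, each file becomes an ordinary feed item whose enclosure points at one of those encoded images (a sketch – the URLs and sizes are made up):

<item>
  <title>df0:Lotus2.adf</title>
  <pubDate>Sat, 22 Apr 2017 10:00:00 +0000</pubDate>
  <enclosure url="https://myblog.example/media/lotus2.png"
             type="image/png" length="903200" />
</item>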

Let's say you have a game collection for your Amiga, right? Let's say 200 games. Wouldn't it be nice to have all those games online? Just readily available regardless of where you may be? Without “you know who” sending you a nasty email?

Well, just encode your game as described above, include the data-picture in your WordPress post, and do that for each of your games. Since you can encrypt these images, they will be worthless to others. But for you it's a neat way of hosting all your games online for free (on WordPress or Blogger) and playing them via Amibian or the patched UAE4Arm (oops, did I share that? sorry Dirk *grin*) – and you're home free.

You know what's really cool? For this part Amibian doesn't even need a server. So you can just save the Amibian.js html page on your phone and that's all you need.

RumoredPirates

Drink up me hearties yo ho!