Freepascal online, awesome work!

June 20, 2017

Freepascal is, with the exception of gcc (the GNU C/C++ compiler toolchain), probably the most versatile and adaptable compiler in the world. Unlike most compilers outside the C camp, Freepascal is a complete toolchain, and it has more than a few similarities to C/C++ in that it is largely self-reliant.

Freepascal scales from the old mc68000 family to the latest Intel i7, impressive!

FPC is easy to port over to new (or old) platforms, and as far as languages go – object pascal has a level of productivity that I have yet to find in other languages.

Now I have no interest in language wars or fanboy stereotyping, all languages have aspects that are unique and beneficial. So this is not an attempt to promote A over B.

Targeting both old and new from a single codebase

While mainstream and work-related topics rarely touch on this, there is actually a healthy group of people still using older platforms. Naturally this is well within the “hobby” range, where passion and curiosity are the prime motivators rather than cash. But still, it is interesting to see how little modern software is capable of scaling back to processors 10, 15 or 20 years out of date.

Alb42 is the nickname of a very industrious individual who has taken it upon himself to port Freepascal back to his roots – which just happen to be the Amiga line of computers.

Some of the best arcade games ever made are “vintage” by nature

But he has also done a lot of work for modern systems. A new Amiga was actually released this year (although most people haven’t noticed it) – and finding development software for a brand new platform is not easy. You have a variant of C/C++ naturally, but the honest truth is that C/C++ is not the language most people use to create desktop productivity software. That is left to Object Pascal, C# and variations of Java. Delphi alone is responsible for a huge portfolio of popular software on Windows, only recently surpassed by monoliths like C#.

So getting freepascal running on Amiga OS 4 (which is the latest and modern operating system shipping with the new Amiga x5000 model) is very important to the market share of the platform.

Do it online!

Alb42 has set up an online compiler testbed where you can type in your Freepascal snippets and compile to executable code on the spot. You can choose between MorphOS, Classic 68k, Aros (x86) and Amiga OS 4 (ppc). It’s a very simple setup, yet it’s possible to do ad-hoc compilation and play around with the system 🙂

While humble, Alb42 has set up an interesting online kitchen sink that demonstrates how flexible and powerful Object Pascal really is

To give it a go, head over to: https://home.alb42.de/fpamiga/
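
If you just want to verify that the toolchain works, a trivial snippet of plain, standard Pascal is enough to paste in and compile:

program HelloAmiga;
begin
  WriteLn('Hello from Freepascal!');
end.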

And as always, visit his blog for both VMware images with cross compilation and binaries for your favorite retro system.

But perhaps even more important: binaries for your new x5000 (!)

Don’t forget to join us at Amiga Dispatch on Facebook.

Cheers!

A day with the Tinkerboard, part 2

June 10, 2017

Please read the first article here before you continue.

Having gone through a wealth of alternative boards – alternatives to the Raspberry PI 3b, that is – I have to admit that I am really, really disappointed by the results. Not just with the Tinkerboard, but also with boards like the ODroid XU4 and the whole fruit basket of banana, orange and pineapple PI clones.

What a waste

And please keep in mind that I have not been out to discredit anyone. I have been genuinely interested in these alternatives for many reasons, not just emulation or multimedia. I have no problem paying that extra 15£ for a system that delivers twice the power of the PI; in fact I even paid close to 150£ for the UP board, which is also the only board that so far has delivered on its promise – Microsoft Windows and the horrors of eMMC storage considered.

Part of what I work with these days includes node.js and full-screen browser views, often touch enabled for POS (point of sale) type machines. This is an exciting field because for the first time in history we can create rich, responsive and rock solid solutions using off-the-shelf web technology.

So while emulation of older systems is interesting for hobby purposes, the performance of vital technologies like the browser – and consequently the performance of JavaScript and the underlying JIT compilation – is naturally paramount for what I do.

So I was really looking forward to the extra power these boards advertise: the CPU, the number of cores, the extra memory and, last but not least, the benefits of a fast GPU. The latter directly impacts the effects we can use in our user interface. And while effects may seem superficial, they are important for touch feedback and the customer’s experience.

We have all come across touch devices that give little or no visual feedback. We all know what it’s like when you type something and the UI just hangs while talking with a server. This is why web tech is so important today. In many ways HTML5 and WebSocket are re-defining what we think of as software.

So while I bring few pie charts and numbers to the table, I do bring honesty and perspective. It’s not all fun and games.

Was it really fair?

It has been voiced by a couple of people that perhaps my way of testing things is unfair. Like I mentioned in my previous article, the PI has a healthy head start and a rich community that not only consumes and demands, but also contributes substantially to the value of the product. A well established community around a product is worth its weight in gold. The alternative boards have yet to establish that, and when it comes to the Tinkerboard, it’s pretty much just getting started.

Secondly, as far as emulation goes, we have to remember that those who make these products rarely have emulation in mind (and probably not the Amiga, which by any standard is esoteric compared to Nintendo and Sega consoles). If we look at what people use gadgets like the Raspberry PI for, I think products like Kodi, Apache and various DIY programming experiments outrank all the retro systems combined. And it’s clear that the people behind the Tinkerboard are more media oriented than anything else; it ships with hardware support for 4K video playback after all.

Having said that, I do feel I gave both the ODroid and the Tinkerboard a fair trial. I did not approach these boards as a developer, but rather as a consumer. I have focused on the experience of familiar things – like using a browser, how well JavaScript runs, whether the UI lags under stress; things that any customer will face after purchase.

UAE4Arm vs FS-UAE

As far as UAE (the Unix Amiga Emulator) goes, there is one thing that might be unfair. Amibian is based around something called UAE4Arm, which is a highly optimized version of the Amiga emulator. I would even go so far as to say it’s THE most optimized UAE version on any platform – Windows and OSX included.

The Amiga was a lot more than just a games machine, it was way ahead of both Windows and OS X for a decade

Comparing that with FS-UAE is perhaps unfair. FS-UAE is based on the older, non-optimized codebase. It’s known for stable emulation and good JIT performance. Sadly the JIT engine only works on x86 and is thus disabled on ARM systems. So to be honest it can’t hold a candle to UAE4Arm. The latter uses lookup tables for everything, fast switch cases instead of long sequences of “if then else” code (easier for the compiler to optimize), and it benefits from hardcore LLVM level 3 optimization. So I agree, comparing FS-UAE to UAE4Arm is like comparing a tiger with a kitten. It’s not even in the same ballpark.
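
To illustrate the technique (just a generic sketch in plain Pascal, not actual UAE4Arm code): instead of testing an opcode against a long chain of if/then/else statements, you fill a table with one handler per opcode value up front, and dispatch with a single indexed lookup:

type
  TOpcodeHandler = procedure(const Operand: word);

var
  Dispatch: array[0..255] of TOpcodeHandler;

procedure HandleNop(const Operand: word);
begin
  // intentionally does nothing
end;

procedure HandleIllegal(const Operand: word);
begin
  // log or raise an "illegal instruction" error here
end;

procedure InitDispatch;
var
  i: integer;
begin
  // default every slot, then register the real handlers
  for i := 0 to 255 do
    Dispatch[i] := @HandleIllegal;
  Dispatch[$00] := @HandleNop;
end;

procedure Execute(const Opcode: byte; const Operand: word);
begin
  // one indexed read and one indirect call, no matter which opcode it is
  Dispatch[Opcode](Operand);
end;

Multiplied by millions of emulated instructions per second, the difference between this and a long chain of conditionals adds up quickly.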

Optimization on this level is almost a lost art these days. People are used to languages like C# and Java – which are hopelessly bloated (for those that have seen what raw assembler and hand optimized pascal can deliver). On the Amiga we learned how to shave off a clock-cycle here and there – and the end result really made a huge difference.

The same type of low-level, every-cycle-counts optimization is used throughout UAE4Arm. FS-UAE is safe, slow and can hardly deliver A500 speed on a Raspberry PI. Yet UAE4Arm spits out 3.2 times the power of an Amiga 4000 with a 68040 CPU. That was the flagship power machine back in the day, and this little $40 card delivers 3 times the power under emulation.

It’s like comparing a muffled 1978 Lada with a pissed off 2017 Lamborghini.

Fair is fair

Having tested just about every distro I could find for the Tinker, even back-tracking to older releases in the hope that the drivers there would be better suited for the board (perhaps the new drivers were released in a hurry, or a mistake was made), the last straw really was Android. My hope was that Android could have better drivers since it sees a lot of development. It has giants like Google and Samsung behind it – so the last glimmer of light was that Asus had focused on Android rather than Linux.

Will Android save the day and give the Tinker its due credit?

I downloaded the latest image, burnt it and booted – thinking that this should finally put things right. Well it didn’t. Quite the opposite.

The first boot took almost an hour. Yes, an hour. I suspect this is because it did a network restore – downloading packages and requirements from the Internet. But that is still no excuse. I have tried Android on the PI and it was up and running in less than 10 minutes; and that includes having to set some initial user information.

When the Android desktop finally came into view, it was stripped down with no repository package manager. In other words you would have to manually install packages. But that would have been OK had it not been for the utterly sluggish performance.

Android on the PI is no race-car, but at least it doesn’t freeze and lock up every five seconds. Clicking on a link in the browser froze the whole system, and you were asked if you wanted to terminate the process again and again while Android figured out the meaning of life or whatever. Every single time.

I’m sorry, but this is the worst experience I have ever had with a board. The ODroid XU4 at least tries to deliver a good experience. I had no problems finding Linux editions that ran well (especially gaming systems, Kodi and some homebrew distros) – and the Chrome build for the ODroid gave far better results than both the PI and the Tinkerboard. It was not the power-house it had been hyped up to be, but at least it delivered a normal desktop experience. (Even the PI locks up when facing demanding tasks, but the Tinker has a quad-core CPU, twice the RAM and a supposedly superior GPU. With the capacity to run four threads and schedule tasks across four cores, the Tinkerboard should remain both stable and interactive under pressure.)

I am sorry, but I cannot recommend the Tinkerboard at all. The ODroid gives you a slight edge over the PI, and if someone baked a more optimized Linux distro for it (it ships with a variant of Ubuntu) then the ODroid XU4 would kick serious ass. But the Tinkerboard is a complete disaster.

Do not waste your money on the Tinkerboard. It may have the hardware, but the software to utilize and express that power is simply not there! It has been the most pointless purchase I have made this year.

A common waste

The common denominator between these alternatives is the notion that more power equals more stuff – which ultimately results in biting off more than they can chew. A distro like Pixel is lightweight and optimized, and the PI foundation has spent a considerable amount of time making sure the drivers run as fast as possible.

Weird Science, a fun but completely fictional movie from 1985. I just posted this to liven up an otherwise boring trip down memory lane

Saying it doesn’t make it so, except in Weird Science

What these alternative hardware suppliers don’t seem to get is this: what is the point of having a CPU that is twice as fast as the PI, when you waste it on a more demanding version of Linux? Both the ODroid and the Tinkerboard ship with Ubuntu clones. And while Ubuntu is wonderful to work with on a high-end x86 machine, with 32 gigabytes of RAM and an i7 CPU growling away under your desk, it is without a doubt the worst possible OS you could pick for a tiny ARM board.

To put things into perspective: the Tinkerboard has the “theoretical” CPU power of a Samsung Galaxy S4 mobile phone (2013). Yet in their infinite wisdom they decided Ubuntu is what is really going to show off this fart of a device.

It makes absolutely no sense from a consumer’s perspective. As a user you want that extra power for running programs. You want to use the extra power on your stuff. Yet these guys seem to think that wasting all the extra power on infrastructure is going to score points.

So if anyone from the Tinkerboard or ODroid camp reads this, please remember the old saying “If you cannot innovate, emulate”. Make a fast, stable and small Jessie distro like Pixel. Focus on optimization. Compile whatever you can with maximum optimization. Shave off every clock cycle you can. Hand carve the drivers in assembler if that’s what it takes.

Because the sad result is that right now the Raspberry PI is outperforming you with lesser hardware. There is no point in buying a Tinkerboard because it actually delivers a worse experience. It has bitten off more than it can chew and it’s trying to be something it’s not.

A day with the Tinkerboard, part 1

June 7, 2017

Anyone into IoT mini computers, be it as a hobby or professionally, has no doubt heard about the Tinkerboard. The Tinkerboard is an alternative to the Raspberry PI (aren’t they all) produced by hardware giant Asus. Everything from its form-factor (size) to its color coordinated GPIO pins is more or less the same as the PI. What separates the two is that the Tinker gives you double the CPU power, double the RAM and pretty much double of everything – if we compare it with the Raspberry PI 3b, that is.

But you don’t get double for free. The Tinkerboard retails on Amazon for USD 60, which means something in the ballpark of 55£ for those that live in the UK. For the rest of us that live in far-away places like Norway, prices and taxes may vary.

Before I continue I want to write a few words about pricing. The word “expensive” is one I keep hearing when it comes to the Tinkerboard, but that is extremely unfair considering what you get for your money.

The Raspberry PI 3b is an awesome product and packs a lot of goodies into a credit-card size computer. The same is true for the Tinkerboard, but they have packed even more stuff into the same space! The difference in price is just 15£ – but those 15£ give you the power to emulate demanding platforms like the Nintendo Wii, Sega Dreamcast and Playstation (if you are into retro gaming); platforms the Raspberry PI 3b doesn’t stand a chance of emulating (at least not smoothly).

For serious applications all that extra CPU power can be the difference between success and failure. In many ways the Tinker feels a bit like working on a low-end laptop. If you are building a movie server for your living-room, the Tinkerboard will be able to convert and stream movies much smoother and better than the PI. And if you are looking for a device to power your next kiosk system or POS device, again the extra memory and CPU power is going to make all the difference. And with just 15£ between the two models, I cannot imagine why anyone would pick the PI if gaming or movie streaming is on the menu.

Oh and it has built-in wi-fi and bluetooth support! Here are the specs as listed on Amazon:

  • High performance quad core ARM SoC at 1.8GHz with 2GB of RAM – the Tinker board features the Rockchip RK3288 SoC with Mali-T764 GPU and 2GB of dual channel DDR3 memory
  • Non-shared GBit LAN, shielded Wi-Fi with upgradable antenna support
  • Highly compatible PCB & topology – Tinker board offers extensive compatibility with SBC accessories & chassis
  • HD audio & HD & UHD video support – Tinker board supports 192kHz/24-bit HD audio along with accelerated HD & UHD (4K) video playback (*requires use of the Rockchip video player in TinkerOS)
  • DIY friendly design – Tinker board features multiple DIY friendly features including a color coded GPIO header, silkscreen PCB and color coded pull tabs

That sounds pretty nice, doesn’t it? 🙂

One does not simply take on the mighty PI

Being an alternative to the Raspberry PI is not as easy as it sounds. You would imagine that it all boils down to the specs, right? So if you beef up the CPU, RAM and GPU, everyone will throw away their PIs and buy your stuff? If only that were true – in which case the Tinkerboard would be awesome, since it outperforms the PI on every level.

But it’s not that simple. There is more to a product than just raw power.

One of the cool things about the Raspberry PI is the community. The knowledge base of the product, if you like. In order to build a community and be a success, everything has to click. I mean just look at the third-party ecosystem that exists around the PI; all those companies selling add-ons and widgets, sensors and remote controls. The PI itself is quite dull. It’s all that extra stuff that makes it so fun to play with. And then you have things like excellent customer service, a forum where people help each other out, well-written documentation – the value of the product suddenly exceeds whatever you paid for it to begin with.

And the Tinkerboard is not alone in trying to get a piece of the action. It competes with boards like the ODroid XU4, the x86 based UP boards, Latte Panda, The Pine boards and a whole fruit-basket of banana, orange and whatever PI clones. All of them trying to get a bite of the phenomenon that is the Raspberry PI.

So Asus has their work cut out for them.

First impressions

Having unpacked the board and plugged in all the cables (which is exactly as dull as it sounds), the first thing you do is look for software – which naturally should be a one-click operation on their website.

Well, the first thing that annoyed me was exactly that: the Asus website. The Tinkerboard is not something that belongs on a sterile, streamlined corporate site. It belongs on a website that is easy to navigate, with plenty of information that hobbyists and “tinkers” need to get started. The Asus website simply lacks the warmth and friendliness of the PI foundation website. I even had to google around a couple of times just to find the disk images. These should be readily available on the site, on the first page even – to make sure customers can get up and running quickly.

It’s a little bit clangy and a little bit jammy

The presentation of the board was OK, but you really shouldn’t have to use an external search engine to locate essential data for something you just bought. Unlike the PI foundation, Asus seems to have put little effort into what customers get once they have bought the board. As for community, I don’t even know if there is one.

When you finally manage to find the bloody download page, these are your options (there are 17 downloads in total, counting older OS images, config files and schematics):

  • TinkerOS 1.8
  • TinkerOS 1.6 Beta (my initial download)
  • Android, also beta

And don’t expect a polished Linux distro like Pixel either. You get a clean distro with some typical stuff pre-loaded (python, perl for some reason, and various tidbits) but nothing close to the quality of Pixel.

Note: Why they would list older beta releases in their downloads is beyond me. It only adds to the confusion of a sterile list of items. And customers who just got their boards will be eager to try it out and hence make a mistake like I did. Beta and unstable releases should be in a separate category, while the stable and up-to-date images are listed first.

The second thing that really irritated me was something as simple as adjusting the keyboard layout and setting the locale. These are simply not present in preferences at all. Once again you have to google around until you find the terminal commands to manually set these things, which utterly defeats the purpose of booting into a desktop environment to begin with. This might seem trivial for someone who knows Linux well, but to a novice it can take hours. Things this fundamental should be available in preferences, or at the very least as a script – like raspi-config on the PI.

But OK. If we look away from these superficial things and focus on the actual board, things are not half bad.

It’s the insides that count

The first thing you are going to notice is how much faster the system is compared to the PI. When you start a demanding application on the PI, like LibreOffice or Lazarus, the whole system can freeze up for a few seconds while the CPU goes through the roof. I was pleased to find that this is not the case here (at least not as disruptive). You can start a program and the system continues to be responsive while things load.

What made me furious though was the quality of the graphics drivers. Performance-wise the Tinkerboard is twice (actually a bit more than twice) as fast as the PI on all fronts – and the GPU is supposed to deliver a vastly superior graphics experience. Yet when I ran Amibian.js (the Smart Pascal JavaScript desktop system) in Chrome, the CSS hardware effects were painfully slow. I even questioned if the GPU was used at all (note: I later confirmed that it wasn’t – so much for downloading a beta image by mistake). I was amazed at the CPU power of the Tinkerboard, because despite having to do everything purely with the CPU (moving pixels in memory, allocating device independent bitmaps, rasterizing layers and complex compositions in X), it was more than usable.

But it sure as hell was not the superior graphics experience I was hoping for.

I also noticed that when moving a normal desktop window (X window that is) rapidly around the screen, it would lag behind and re-trace your steps until it caught up with the mouse pointer. This was very odd since most composition engines would have eliminated these movements as much as possible – and the GPU would do all the work. Again, I blame the drivers and pondered what could possibly make a GPU perform so badly.

I’m surprised it booted at all, nothing is working

So while the system is notably faster than the PI, I get the sense that the drivers and tooling are not really polished. The hardware no doubt has juice, but without drivers carefully written for the board, most of the extra juice you pay for is wasted.

In the end I opened up Chrome and typed “chrome://gpu” to see what was going on. And not surprisingly, I was correct. It wasn’t using the GPU at all, and I had wasted hours on something that should have been clearly marked “Lacks hardware acceleration” in bold, italic, underlined big red letters. Like the PI foundation did when they shipped the Raspberry PI 2 without GPU drivers. But they promptly delivered, and it was OK, because you were informed up front.


Back to scratch

After a trip back to google I downloaded the right (or at least working) image, namely the TinkerOS 1.8 stable. It booted up, resized the partition to fill the whole SD card automatically (and other boring bits).

The first thing I did was start chrome and use the “chrome://gpu” to get some statistics. And thankfully most of the items were now working and active.

That’s better, although I ponder why GPU DIBs are not active

Emulation, the name of the game

While I have serious tests to perform, there is no doubt that what I was most eager to play with was UAE (the Unix Amiga Emulator). Like most retro-heads I have a ton of Raspberry PIs and ODroids around the house with the sole purpose of emulating older game or computer systems. And I think everyone knows that my favorite computer in the whole wide world is the Commodore Amiga.

On the PI, Gunnar’s native Amibian distro gives you awesome performance. If you overclock the RAM, GPU and CPU using safe numbers, the PI 3b spits out roughly 3.2 times the power of an MC68040-based Amiga 4000. In other words, 3 times the power of a high-end Amiga from the early 90s. That is quite an achievement for a $40 board. You can only imagine my expectations for a board running more than twice as fast as the PI does (and costing 15£ more).

Boots fine but you can forget about using it. The PI is 100 times faster than this

One snag though was that UAE4Arm, the highly optimized version of the Amiga emulator, doesn’t exist for the Tinkerboard. In theory it should compile without problems, since the latest version uses SDL exclusively. So there is no odd, deprecated UI toolkit like the old UAE4Arm used.

The only thing available was FS-UAE. And while that is a very popular emulator on Windows and Mac, it’s sadly not optimized as much as UAE4Arm is. The latter uses lookup tables to the extreme, all if/then/else statements have been replaced by fast switches – and it’s just optimized beyond anything else running on ARM.

The result on the PI is phenomenal, but sadly FS-UAE was all I could play with. I will try to compile UAE4Arm on the Tinkerboard later though. But right now I’m too busy.

Not optimized at all

It is hard to pinpoint exactly where the problem is, but I suspect the older codebase of FS-UAE combined with poorly written drivers is what resulted in the absolutely sluggish behavior.

Don’t get me wrong, if all you want to do is play some A500 or A1200 games, the Tinker is more than capable. But if you want to run the same high-performance desktop as Amibian gives you on the PI – you can pretty much forget it.

I set the system to emulate an A4000, activated the JIT engine and set the emulated display to match my desktop. This should run like hell – yet it was slow as a snail. You could literally see the mouse-pointer slowly trying to keep up as I moved it across the desktop.

One interesting thing though. When I ran fs-uae on the first image, the beta image where everything was crap, it actually ran exceptionally well. But here on the so-called stable and “up to date” version, it was useless for anything but clean floppy gaming.

There are many factors involved in this. The drivers sit at the bottom, just above the kernel, and expose the hardware so it can be used through a common API. On top of this you have stuff like OpenGL, and on top of that again you have SDL, which simplifies typical gaming or graphics tasks. This may sound like a long road for code to travel, but it’s actually very fast – if the drivers expose the power of the GPU, that is.

I can only conclude at this point that the Tinkerboard has the firepower – I have seen the results of tests performed by others (and tested a retro-gaming image), and you feel the difference almost immediately when you boot the device. But the graphics drivers seem crippled somehow. It’s easy to point the finger at Asus and say they have done a terrible job at writing drivers here, but it can also be a bottleneck elsewhere in the system.

This is something Raspberry PI has got so right. Every part of the system has been optimized for the hardware, which makes the PI feel smooth even under a lot of stress. It delegates all the hard graphical stuff to the GPU – and the GPU just deals with it.

Sadly this is not the case with the Tinkerboard so far. And it really is such a shame because the specs speak for themselves.

Android

Next time: we will be giving Android a test, and hopefully that will have drivers that are more up to date than whatever we have seen so far.

Smart Pascal: Memory and pointers

June 4, 2017

Allocating, manipulating and working with memory is an important part of any programming language. While there are many ways of achieving the same result, comparing operations that work directly on the data through pointers with code that is restricted to high-level, fixed-datatype arrays is not even a competition. By using pointers you will be able to copy, fill, move and otherwise do things to large chunks of memory that would otherwise be slow and impractical.

Anyone who has ever coded their own game, or worked with large blobs in a dataset, knows this. I would even argue that you can’t write a high-speed game in a native environment without pointers (unless you are using one of those game makers, which is not real programming anyways) – and the same can be said about database engines. Could you imagine writing a database engine without using pointers to move data around in memory? I would hope not.

I’m not saying that you can’t, I’m simply saying it would be a total waste of time not using pointers when the alternative would be 100 times slower.

The web stack and memory

JavaScript is known for many things, but manual memory management is certainly not one of them. That doesn’t mean JavaScript lacks any features for dealing with memory (those days are thankfully behind us), it simply means that your average JavaScript developer hasn’t explored this aspect to the same extent as a C or Object Pascal developer would. There is no shame in this; if JavaScript is all you know, then it is naturally a field you are uncomfortable with.

But the JavaScript virtual machine and runtime environment in a modern browser is actually quite good at memory. A lot has happened to JavaScript these past few years, and regardless of what engine you use – be it SpiderMonkey under Firefox, V8 under Chrome, JavaScriptCore under Safari or Chakra under Microsoft Edge – they have all been advanced with cool new features that Object Pascal and C/C++ coders are familiar with.

So the fact is that a well-written JavaScript function can move data around just as fast as your Object Pascal or C++ code. You just need to know the rules, and get things into a framework that is easier to work with. JavaScript may have the functionality – but like all things JavaScript it’s not pretty and takes some getting used to. Which is why the Smart RTL is so nice to have.

How does memory work under JSVM?

First, a few words about the JSVM (the JavaScript virtual machine). The runtime and the environment are two different things. If you have ever used a script engine in your Delphi or Lazarus applications, like DWScript, PascalScript or Lua for that matter, you will notice that out of the box these engines have no functionality except the fundamental language. They can run functions and create class instances, but the engine itself has no concept of a file or a chunk of memory. Not unless you define those features and register the methods that deliver them.

So it is with the JavaScript runtime environment. It is initially empty. Engines like V8 or SpiderMonkey have, out of the box, no special functionality at all. It is up to whoever uses the engine (in this case those that make browsers) to write and expose native objects to the engine – objects that your code can access and consume.

The way memory is handled under node.js, for instance, is much faster than how it’s done in a browser, because node has been optimized for server tasks rather than client tasks. There are also subtle differences between a node.js buffer object and a browser buffer object. This is where the strength of the Smart RTL becomes apparent, because it shields you from these differences and gives you a unified API that deals with all of this for you. So no matter if you work with node.js or Firefox – the same code will run identically on both JSVM configurations.

OK, let’s look at what JavaScript has to offer. As mentioned above, JavaScript is empty by default, and it’s up to the browser or node to expose a set of common objects you can access and use. These objects are first and foremost:

  • Buffers
  • Views
  • Blobs

PS: We can ignore the latter since it’s not really interesting at this point.

Right, as you can probably guess, the buffer object is the actual data. Or more correctly, it is the container object for your data. It doesn’t have any read or write methods, and it exists only to keep track of an allocated chunk of memory. Think of it as a TMemoryStream without any read or write functionality.

But how do we access that memory? Well, this is where things get interesting, because JavaScript has a view object that you attach to your buffer. It’s a bit like creating a TStreamWriter or TStreamReader object under Object Pascal.

The interesting part is not that a view object exposes read and write methods, but that the view type defines a fixed datatype. A fixed datatype of your choosing.

Let’s say you want to access your buffer as an array of longwords – well, there is a view for that (and you can also create something called “typed arrays” connected to a buffer). But you can also access the same buffer using, say, a word (16 bit) integer if you wish. Since you can have multiple views attached to the same buffer, this opens up for some interesting optimization techniques – not to mention spectacular error messages when you get it wrong (PS: stick to the RTL and it will take care of you).

The rule is that unsigned datatypes (like uint8 for a “normal” byte) are prefixed with a “u”, while signed types have no prefix.

  • Views that access a buffer as unsigned types carry the “u” prefix
    • Uint8Array
    • Uint16Array [and so on]
  • Signed views lack the prefix
    • Int8Array
    • Int16Array

Now the Smart Pascal RTL will shield you from all the nitty-gritty – but it’s important to understand how memory and raw data are dealt with behind the scenes by the JSVM. And abstracting the reader/writer methods from the actual data makes sense. In fact I implemented something similar myself earlier (my ByteRage Delphi library).

Once you get to know it, every piece falls into place.

Smart memory

The units we will be looking at are the following:

  • System.Types
  • System.Types.Convert
  • System.Memory
  • System.Memory.Allocation
  • System.Memory.Buffer

Start by opening up the System.Memory unit. Spend a few minutes getting an overview of the classes and functions there. And when you have had a peek, focus on the class TAddress.

TAddress is a class that wraps around a JavaScript untyped buffer. It can be compared to a pointer holding the address of the memory you allocated (but it holds more, like the size). Let’s have a look at it:

  TAddress  = partial class(TObject)
  private
    FOffset:    Integer;
    FBuffer:    TMemoryHandle;
  protected
    function    GetSize: integer;virtual;
  public
    Property    Entrypoint: integer read FOffset;
    Property    Segment: TMemoryHandle read FBuffer;
    Property    Size: integer read GetSize;
    function    Addr(const Index: integer): TAddress;
    Constructor Create(const Segment: TMemoryHandle;
                const Offset: integer); overload; virtual;
    Destructor  Destroy;Override;
  end;

As you can see it’s very small and compact. It exposes a segment handle property, which is a reference to the memory object it represents. It also exposes size information, an entrypoint and a curious Addr() function.

The Addr() function is for getting a sub-address from within the memory buffer. Let’s say you want to write some data at position 1024 in your allocated buffer (offset 1023, since offsets are zero-based): calling FMyAddress.Addr(1023) returns a new TAddress instance pointing at that particular place inside the buffer.

This is why we have an entrypoint property. It keeps track of what starting offset a TAddress begins at. If you start at 1023, then all subsequent offsets will have the entrypoint added to them; they will be relative to this position. Think of it as the Position property of TStream if you like. And you can naturally have as many TAddress instances pointing to the same buffer as you like – hence we use offsets to speed things up (instead of creating a new view object for every single reference).

Let’s allocate some memory:

procedure TForm1.ReserveMemory;
var
  LData: TAddress;
begin
  LData := TMarshal.Allocmem(1024 * 1024);
end;

The above code would allocate 1 megabyte of data, and LData now points to it. We can now use the other methods of TMarshal to work with the buffer. Let’s have a look at the methods TMarshal has to offer:

  TMarshal = class static
  public
    class property  UnManaged: TUnManaged;
    class function  AllocMem(const Size:Integer): TAddress;
    class procedure FreeMem(Const Segment: TAddress);

    class procedure Move(const Source:TAddress;
                    const Target:TAddress;
                    const Size:Integer);overload;

    class procedure Move(const Source:TMemoryHandle;
                    const SourceStart:Integer;
                    const Target:TMemoryHandle;
                    const TargetStart:Integer;
                    const Size:Integer);overload;

    class procedure FillChar(const Target:TAddress;
                    const Size:Integer;
                    const Value:char);overload;

    class procedure FillChar(const Target:TAddress;
                    const Size:Integer;
                    const Value:Byte);overload;

    class procedure ReAllocMem(var Segment: TAddress;
                    const Size:Integer);

    class function  ReadMemory(const Segment: TAddress;
                    const Size:Integer):TByteArray;overload;

    class procedure WriteMemory(const Segment: TAddress;
                    const Data:TByteArray);

    class procedure Fill(Const Buffer: TMemoryHandle; Offset: Integer;
                    ByteLen: Integer;const Value: Byte);
  end;

As you can see, you pretty much have the same features as in Delphi or Freepascal, albeit in a different form. Delphi, being a native language, doesn’t need such intermediate objects, but the above code is actually leaps and bounds easier to work with than anything JavaScript gives you out of the box.

Let’s say I want to write a longword 1024 bytes into our allocated memory. You would then use the Addr() function we talked about earlier to simplify things:

procedure TForm1.ReserveMemory;
var
  LData: TAddress;
begin
  LData := TMarshal.Allocmem(1024 * 1024);
  TMarshal.WriteMemory(LData.Addr(1023), TDataType.Int32ToBytes($BABECAFE));
end;

Now you are probably wondering – why waste time on converting the $BABECAFE value to bytes? Well, you don’t have to, but it’s one way of doing it. You can of course create a uint32 view and write it directly like you would an array. But in order to make you more familiar with the RTL landscape, I figured it would be valuable to start with the byte conversion – which, by the way, saves you a lot of time. So have a peek at TDataType in System.Types.Convert.pas; the features there will impress you once you realize how little JavaScript gives you – and how much the RTL makes of it.
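
Just to close the loop, here is a rough sketch of reading the value back out again. Note that the name of the bytes-to-integer helper below is my assumption – check TDataType in System.Types.Convert for the actual conversion methods it offers:

procedure TForm1.ReadItBack;
var
  LData:  TAddress;
  LBytes: TByteArray;
  LValue: integer;
begin
  LData := TMarshal.AllocMem(1024 * 1024);

  // Write the longword at offset 1023, exactly as above
  TMarshal.WriteMemory(LData.Addr(1023), TDataType.Int32ToBytes($BABECAFE));

  // Read the four bytes back out of the buffer ...
  LBytes := TMarshal.ReadMemory(LData.Addr(1023), 4);

  // ... and turn them into an integer again. BytesToInt32 is assumed here;
  // the real helper in System.Types.Convert may be named differently
  LValue := TDataType.BytesToInt32(LBytes);

  // Hand the memory back when you are done with it
  TMarshal.FreeMem(LData);
end;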

Moving up to TBinaryData

Now messing around with low-level byte buffers can be rewarding, but it’s not really the level you want to work on. You want the power of low-level stuff, but with the infrastructure of high-level programming.

In the unit “system.memory.buffer” you will find a class that is written to cover all of your binary needs. It is optimized for reading and writing data fast, and for writing various datatypes directly into the buffer without TAddress being a factor.

So while you can enjoy the low-level functionality of AllocMem, you are actually expected to use TBinaryData. TMemoryStream is also fine, but TBinaryData will always be faster.

And it comes with methods for pretty much everything you could think of, including portability! Be it buffer to stream, buffer to arrays or the other way around – TBinaryData has a lot of cool features. With ZLib just added to the new RTL, compression and zip file stuff will be added to this class (through partial classes, so only when you include the zlib unit will the methods appear in the class):

  TBinaryData = partial class(TAllocation
      ,IBinaryDataReadAccess
      ,IBinaryDataWriteAccess
      ,IBinaryDataReadWriteAccess
      ,IBinaryDataBitAccess
      ,IBinaryDataImport
      ,IBinaryDataExport)
  private
    FDataView:  JDataView;
  protected
    function    GetByte(const Index: integer):Byte;
    procedure   SetByte(const Index: integer;const Value:Byte);
  protected
    function    GetBitCount: integer; virtual;
    function    GetBit(const bitIndex: integer): boolean;
    procedure   SetBit(const bitIndex: integer;const value: boolean);
  protected
    function    OffsetInRange(Offset: integer): boolean;
  protected
    procedure   HandleAllocated; override;
    procedure   HandleReleased; override;
  public
    property    BitCount:integer read GetBitCount;
    property    Bytes[const ByteIndex: integer]: byte read GetByte write SetByte; default;
    property    Bits[const BitIndex: integer]: boolean read GetBit write SetBit;

    function    Allocation: IAllocation;

    procedure   AppendBytes(const Bytes: TByteArray);virtual;
    procedure   AppendStr(const Text: string);virtual;
    procedure   AppendMemory(const Buffer: TBinaryData; const ReleaseBufferOnExit: boolean);virtual;
    procedure   AppendBuffer(const Raw: TMemoryHandle);overload;
    procedure   AppendFloat32(const Value: Float32);virtual;
    procedure   AppendFloat64(const Value: Float64);virtual;

    procedure   CopyFrom(const Buffer: TBinaryData; const Offset: integer; const ByteLen: integer);
    procedure   CopyFromMemory(const Raw: TMemoryHandle;
                Offset: integer; ByteLen: integer);

    function    CutBinaryData(Offset: integer;ByteLen: integer): TBinaryData;
    function    CutStream(const Offset: integer; const ByteLen: integer): TStream;
    function    CutTypedArray(Offset: integer; ByteLen: integer): TMemoryHandle;

    procedure   Write(const Offset: integer; const Data: TByteArray);Overload;
    procedure   WriteFloat32(const Offset: integer; const Data: Float32);
    procedure   WriteFloat64(const Offset: integer; const Data: Float64);

    procedure   Write(const Offset:integer;const Data:TBinaryData);overload;
    procedure   Write(const Offset:integer;const Data:TMemoryHandle);overload;
    procedure   Write(const Offset:integer;const Data:String);overload;
    procedure   Write(const Offset:integer;const Data:integer);overload;
    procedure   Write(const Offset:integer;const Data:Boolean);overload;

    function    ReadFloat32(Offset:integer):Float;virtual;
    function    ReadFloat64(Offset:integer):Float;virtual;
    function    ReadBool(Offset:integer):Boolean;Overload;
    function    ReadInt(Offset:integer):integer;overload;
    function    ReadStr(Offset:integer;ByteLen:integer):String;overload;
    function    ReadBytes(Offset:integer;ByteLen:integer):TByteArray;overload;

    function    Clone: TBinaryData;

    procedure   SaveToStream(const Stream: TStream);
    procedure   LoadFromStream(const Stream: TStream);

    procedure   FromBase64(FileData: string);
    function    ToBase64: string;
    function    ToString: string;
    function    ToTypedArray: TMemoryHandle;
    function    ToBytes: TByteArray;
    function    ToStream: TStream;
    function    ToHexDump(BytesPerRow: integer;
                Options: TBufferHexDumpOptions): string;

    {$IFDEF APPTYPE_VISUAL}
    procedure   LoadFromFile(const aFileURI:String; const OnReady:TNotifyEvent);
    {$ENDIF}

    Constructor Create(aHandle: TMemoryHandle); overload; virtual;
  end;

If you already have allocated memory, or perhaps your server has been given a handle to a buffer for a file, notice that the constructor is overloaded and can work with existing buffer objects. In such a case it will not release the memory even if you free the instance (since it doesn’t own it, only borrows it).

I took it one step further and added bit-manipulation as well. So you can allocate a large buffer and process it on bit level if you need to. I actually used that a lot when doing some audio code. It is also very handy when keeping track of visible sprites in a game (“find me an idle sprite”, where each bit represents a sprite’s busy state). And it’s even an important feature when writing database engines.
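
Here is a rough sketch of that sprite example, using only the TAddress, TMarshal and TBinaryData members shown above (the sprite bookkeeping itself is made up for illustration, and object lifetime details may differ under the garbage-collected JSVM):

procedure TForm1.FindIdleSprite;
var
  LAddr:   TAddress;
  LStates: TBinaryData;
  i, LIdle: integer;
begin
  // 32 bytes = 256 bits, one bit per sprite. Fresh buffers start zeroed,
  // so every sprite begins as "idle"
  LAddr := TMarshal.AllocMem(32);
  LStates := TBinaryData.Create(LAddr.Segment);

  // Mark sprite #12 as busy
  LStates.Bits[12] := true;

  // Scan for the first idle sprite
  LIdle := -1;
  for i := 0 to LStates.BitCount - 1 do
    if not LStates.Bits[i] then
    begin
      LIdle := i;
      break;
    end;

  // LIdle now holds the index of the first free sprite (or -1 if none)
end;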

Well, this was a quick introduction to the low-level memory stuff in Smart. There are a ton of features I haven’t even mentioned yet, like being able to turn data into a URI object that can be attached to anchor (A tag) objects and auto-clicked to force a “save as” dialog to appear. This is really neat if you have a web application that generates data or binary files the user can save.

Until next time!

Smart Pascal: stop sharing beta code

May 20, 2017

I was a bit sad to hear that our RTL, despite the testing team being a very closed group of people, had made its way into the hands of others – people who then blog or comment that it’s unstable or that too many changes have been added.

When the phrase “early beta” and “our first sprint is to test the low-level functionality, like pointers, memory allocation and manipulation, streams, compression, codecs and type conversion” is clearly stated, then why on earth would someone start at the other end (the visual high level) which is due for changes weeks into the future?

Since this is doing the rounds I figured it is only fair that I clear up a few things:

An async run-time library that should deliver identical behavior on 9 operating systems and a plethora of browsers is not something you slap together over a weekend. Many of the solutions in SMS are the result of months of hard work; research and testing on PC, Mac, various mobile devices and six different embedded boards.

I don’t know how many hours I have put into this project these past 6 years, but it’s more than I care to admit.

Obviously an early beta is not going to give you 100% compatibility. Especially not since the whole point of the first sprint was for our group to write, test and make sure the low-level methods are rock solid before we move on.

The second sprint is due soon; here we will look at databases, data binding, node.js and writing system level services (via the PM2 process manager).

The last sprint will focus on the visual, high-level aspect of the RTL: things like custom controls, layout, non-visual components, event objects, messaging and user interface code.

I’m really gutted that we didn’t even make it through the first sprint before the code was shared. I have no idea who did this, and I frankly don’t care. But from now on I will only throw ball with people I trust and have a friendship with.

People doing reviews and publishing opinions on an upgrade that is presently 1/3 into the final release candidate – I just feel that is unfair.

Polyfills are not shims, and they are not dangerous

It was voiced that the new RTL loads in a couple of files that were considered dangerous by Google. Actually, that could not be further from the truth.

But stop and think about this: why would I link to external JavaScript files for things like mutation-events and observers? If these things are available in browsers (which they have been for a long time) why would I need external files?

The answer is that these files are polyfills. And there is a huge difference between what is known as a shim, and a polyfill.

A polyfill is a small JavaScript library that checks if the browser supports the features you want. If it doesn’t, the polyfill registers its own code to compensate for the lack of a function or feature. In other words, a polyfill is a full implementation (!)

So even if Google, Mozilla or Microsoft remove support for node observers or mutation events tomorrow, your application won’t be affected by it. Because the polyfill libraries are just that: full library implementations with no dependencies.

In the early beta I shipped out both the observer and mutation-events libraries as external references. This has already been changed: unless you include SmartCL.Observer.pas or SmartCL.MutationEvents.pas, these will no longer be linked automatically. That was never the plan either.

Had one of you (I have come across 3 different places with comments and info on the RTL around the web) bothered to just ask me, I would have included you in the beta process and explained this.

The executables have become so large!

This is another statement I stumbled over: “The new RTL is huge and it generates large JavaScript binaries!”.

First of all, you were using an early beta, which means things will be in flux while each level of the RTL is being tested, probed and verified. The Pascal uses sections in some units reference code they no longer use; cleaning that up is due for sprint 3. But again, in our first sprint the focus was on memory, binary data, type conversion, streams, compression and various low-level stuff – not high-level code!

As for size, that is actually wrong: our namespacing and unit fragmentation is there for a reason. By dividing files into 3 namespaces and fragmenting large units with many classes into more files, our linker is able to trim away unused code more efficiently, as well as optimize your code beyond anything people have seen so far.

Rule of thumb: when you have finished writing your application, always switch on the project’s optimization options for the final build. So yes, you are expected to use packing, obfuscation etc. before deployment – not just to protect your code, but also to make it load and run faster.

Since you will either use your applications on a mobile device, an embedded device, or just online like a website (or on the desktop via node-webkit compilation), you can further speed up loading by activating gzip compression on the server. You will be surprised just how fast and small your applications are when you tune them correctly.

Practical example

The Smart Embedded and Cloud desktop consists of roughly 40+ unit files. Since this desktop can emulate older systems, like the 68k Amiga, Atari, Super Nintendo and much more (more about those later), I have made linking these in optional (read: you have to include the units in order to link the code in, UAE.Core.pas being the primary file).

Without the 68k emulation layer (and various other emulators, which are just added for fun and inspiration) and the Aros ROM files (the bios file for the emulator), we are looking at a code size in the ballpark of 4.7 megabytes – which is nothing compared to high-end native solutions.

If you include the 68k emulation layer, the end result is a 10 megabyte index.html file. Again this is small potatoes if you work with embedded systems or kiosk booths.

But since the new RTL design was created to be heavily optimized, far better than the older RTL ever was, the end result after compression is 467 kilobyte (!). When you compress the version that includes the 68k emulation layer – it optimizes down to 1.5 megabyte of code.

  • No 68k emulation layer = 4.7 megabyte uncompressed
  • No 68k emulation, compressed = 467 kilobyte (!)
  • 68k emulation added = 10.2 megabyte uncompressed
  • 68k emulation included, compressed = 1.5 megabyte

From 4.7 megabyte to 467 kilobyte

This level of compression and optimization was planned. I have re-arranged the whole RTL in order to create the best possible circumstances for the compression and obfuscation modules – and I believe the results speak for themselves.

Is the new RTL larger? Indeed it is, with plenty of new classes, controls and libraries. The more features you want to enjoy, the more infrastructure must be in place. That is just how it is, regardless of language.

But you should also know that the new and longer inheritance chain has little or no impact on execution speed. This is what the compiler options “de-virtualization” and “smart linking” are all about. In most cases the code for TW3OwnedObject and TW3OwnedErrorObject is linked into TW3Component. Here is the inheritance chain as of writing:

  • TW3OwnedObject
    • TW3OwnedErrorObject
      • TW3Component
        • TW3TagObj
          • TW3TagContainer
            • TW3MovableControl
              • TW3CustomControl

Why did I change this? There are many reasons. First of all I wanted to make the RTL more compatible with Delphi, where names like TComponent and TCustomControl are known and understood by most. So when facing TW3Component or TW3CustomControl, most Delphi developers will intuitively know what these are all about. TW3Component is also the base class for non visual, drag & drop components.

  TW3OwnedErrorObject = class(TW3OwnedObject)
  private
    FLastError: string;
    FOptions:   TW3ErrorObjectOptions;
  protected
    property    ErrorOptions: TW3ErrorObjectOptions read FOptions;
    function    GetLastError: string;
    procedure   SetLastErrorF(const ErrorText: string;
                const Values: Array of const);
    procedure   SetLastError(const ErrorText: string); virtual;
    procedure   ClearLastError;
  public
    property    LastError: string read FLastError;
    constructor Create(const AOwner: TObject); override;
    destructor  Destroy; override;
  end;

There are also other benefits, like uniform error handling. While this might seem useless in an exception-based language, there are places where an exception is not something you want to throw – like in a system-level service running under node.js.

In such cases you should catch whatever exception occurs and call SetLastError() with the description. The caller will then check if something went wrong and can take measures.
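
A rough sketch of that pattern, based on the class declaration above (TMyService, the logging call and the file-writing itself are made up for illustration):

type
  TMyService = class(TW3OwnedErrorObject)
  public
    function SaveSettings(const FileName: string): boolean;
  end;

function TMyService.SaveSettings(const FileName: string): boolean;
begin
  ClearLastError;
  result := false;
  try
    // ... write the settings file via the node.js filesystem API here ...
    result := true;
  except
    on e: Exception do
      // never let the exception escape a running service; record it
      // and let the caller decide what to do about it
      SetLastError(e.Message);
  end;
end;

// On the calling side you test for failure instead of catching:
//
//   if not Service.SaveSettings('settings.json') then
//     Log('Could not save settings: ' + Service.LastError);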

Embedded, a different beast

One of the things Smart Pascal does exceptionally well is embedded platforms. Regardless of what you have been hired to do – be it a kiosk booth in a museum, a touch-based vending machine, a parking payment terminal or just some awesome IoT project – Smart Pascal makes this child’s play. And the reason it’s so easy to write powerful web applications is precisely the broad scope of our RTL.

Smart Pascal is not a tool for creating websites (well you can, but that’s not what it was designed for). It is a tool to write high quality, complex and feature rich applications.

The distinction between a “web application” and “website” may seem subtle at first, but it all boils down to depth. The Smart RTL now contains a huge amount of classes and third party libraries that simplify writing modern applications with depth.

Other frameworks that (for some reason) are very popular, like jQuery, jQuery UI and ExtJS, lack this depth. They look great, but that’s it. I recently caught up with a project that had taken the developers a whole year to create in pure JS. It took me 3 weeks to finish what they spent over 12 months implementing. And that has to do with depth and proper inheritance – not that absurd prototype cloning JavaScript coders use.

To sum up: if you want to enjoy the same features and power in the browser as you do with Delphi or Lazarus on your desktop, then you can’t expect Smart to produce binaries smaller than its native counterparts.

But more importantly, on embedded systems like the Raspberry PI, the x86 UP board or the new and sexy Tinkerboard – do you really think it’s going to matter if your application is 4, 8 or 10 megabytes? By now you should have realized that our compressor and obfuscator bake that down to 460 kb or 1.5 megabytes – both peanuts compared to similar systems written in Java, C# or Delphi.

Ask and ye shall receive

I don’t really care who shared my RTL, but I am sad that it happened. I’m not a difficult person when it comes to these things – all you have to do is ask.

Also remember that the RTL is a copyrighted product. It belongs to a registered financial entity (read: company) and is not subject for distribution.

FMX4Linux is coming, and we can’t wait!

May 3, 2017

When Embarcadero announced Linux support for their Tokyo release of Delphi, my soul literally left my body for a moment. Could it really be true? After all these years have Embarcadero done what many said would be impossible?

I must admit that people saying something is impossible has lost its sting for me. Over the past 3 years I have done one of these “impossible” things on a weekly basis, yet people are just as shocked every single time. But this time – I was the one in a state of excitement.

As an outspoken (an understatement perhaps) active blogger, my skin has grown thick over the years. I also get to see a lot of cool tech long before it’s commercially available and mainstream – so it takes more for me to be swayed and dazzled. And you grow a healthy instinct for separating bullshit from true technical achievements too.

Delphi Tokyo is probably one of the finest Delphi editions to date

But yes, I admit it – this time Embarcadero surprised me in every positive way imaginable. If you follow my blog you know that I call it as I see it and hold little back. But this was a purely positive experience.

Now I know what you are going to say; everyone knew about this right? Yeah, me too. But my mind has been elsewhere lately with work and projects, so I didn't catch the release buzz from the closed forums (well, "closed" is a matter of perspective, I have friends from Russia to the United States, and from China to Sudan) and for the first time since the Borland days – Embarcadero got the drop on me.

I'm the kind of guy that runs on passion. Delphi, Smart Pascal and even object pascal as a general language is not just work for me – it's something I love to use. I relax and enjoy myself when coding. So you can imagine my reaction when my boss sent me a message with "Download Tokyo and give me a report". It was close to midnight but I was out of that bed faster than bacon on toast, ran into my home office wearing nothing but boxers and a Commodore t-shirt – and threw myself into Embarcadero Developer Network's download section.

From hero to zero in 2 seconds

I think it was around 07:00 the next morning, one hour before I was due for work, that the magic phrase "command line and system services [daemons] only" hit home. And yes, I "kinda" knew that before – but maybe, just maybe, Embarcadero had thrown in visual applications in the 11th hour. I even had a friendly bet with Jim McKeeth that an FMX UI solution would appear less than 24 hours after release (more about that later).

linuxproj

Now that is a beautiful thing

Either way, it was quite the anti-climax after all that work in VMWare installing Delphi from scratch, waiting, hoping and praying. First of all because I remember watching an in-depth technical review about Firemonkey by David Intersimone a few years back; the one where he describes the Firemonkey architecture in detail. Especially how the abstraction layer between the visual control framework and the actual OS made it possible for FMX to quickly adapt to new environments. Firemonkey is a complex and highly adaptable framework, but its biggest strength is paradoxically enough its simplicity. A simplicity achieved through abstracting the UI from the rendering functionality. At least as much as possible; you still have to deal with OS-level windowing, security and all of that – so it's no walk in the park either.

In the presentation (sadly I don't have a link to this one) David went to great lengths to explain that regardless of operating system, as long as someone implemented a driver class that exposed the set of features FMX needs – Firemonkey would run as long as the compiler supported the instruction set. Visual engines could be DirectX, OpenGL, Cairo or whatever makes sense on that particular platform. As long as the "bridge class" talking with the operating system is there – Firemonkey can run on a toaster if need be.

So Firemonkey has the same abstraction concept that we use in the VJL (Visual JavaScript Component Library) for Smart Pascal (differences notwithstanding). If it's Windows you are running on, DirectX is used; if it's OS X then Apple's implementation of OpenGL runs the show – and if you are on Linux you can pick between OpenGL and Cairo. I must admit I haven't looked too closely at Cairo, but I know it was designed to make advanced composition and UI rendering more efficient.

Why Tokyo didn't ship with visual application support is beyond me, but considering the timing of what happened next, I have made my own conclusions. It doesn't really matter to be honest – the point is we got it and it rocks!

FMX4Linux to the rescue!

Remember the wager I mentioned with Jim McKeeth? It wasn't a serious wager, I just commented and said "a Linux FMX solution will appear within 24hrs, you can bet on it" and added a smiley. I had no idea who or how, I just knew it was going to turn up.

Because one thing that is a sure bet – the Delphi community is a group made up of highly resourceful, inventive and clever people. And I was pretty sure that it wouldn't take many hours before someone came up with a patch or framework to fill the void. And right I was. Less than 24 hours later it was fact rather than conjecture.

fmxlinux

This is just a must have. There is no debate.

So, less than 24 hours after Delphi Tokyo hit the shelves, Eugene Kryukov and Alexey Sharagin presented "FMX for Linux". Giving you both the missing rendering back-end that talks to the system – and some kind of "widget mapping" (as far as I can understand, I won't pretend to know how they did it) that renders your UI according to GTK. So it's not just a simple "patch", theme or class that calls a handful of external routines; it's a full visual implementation of FMX for Linux. Impressive? Oh yeah, and then some!

Let us explore!

Over the next few days I will be giving you an in-depth look at how this system works. I'm also going to test drive Html Components and see if we can get that running under Linux as well. Since FMX for Linux is still in development we have to allow for that – but being able to target Ubuntu is pretty cool! And Remobjects, glorious Remobjects SDK; if that works out of the box I will dance the jig and upload it to YouTube, I swear to cow!

Remobjects_linux

Linux is about to feel the full onslaught of object pascal

There is little doubt what the next Smart Pascal IDE will be based on, and when you combine FMX for Linux, HTML components, Remobjects SDK and TMS into one – you get serious firepower to play with that will give even the most hardened C/C++ QT developer reason to be scared. And that is before the onslaught of Data Abstract and the Remobjects C# native compiler.

Oh man next weekend is going to be the best ever!

Let’s not forget ARM targets

asus

The “PI killer” has arrived!

On a second note we will also be looking at my latest embedded toys – namely the Asus Tinkerboard. I just got two of them in the mail today. It's going to be exciting to see how it fares against the Raspberry PI, ODroid XU4 and the Intel Atom based UP board.

We will also be testing how these two cards can be clustered together using node.js to work as one – and see how that impacts performance for our node.js based Smart Desktop project. These are exciting times indeed!

To make things even more interesting I will be pitching the Tinkerboard against the ODroid XU4 (original version, not the passive save-the-environment unicorn they push now) and against both the x86 UP v1 and the Raspberry 3b. Although I think the Raspberry PI is in for the beating of its life when the ODroid and Tinkerboard are overclocked to blood-lust mode!

smartdesk

The Smart desktop has a powerful node.js back-end that packs a punch

So, when LLVM optimized JavaScript runs Mc68040 machine-code at 4 times the speed of a high-end Amiga 4000, I will be content.

So let’s do another “that’s impossible”¬†shall we ūüôā

Smart-Pascal: A brave new world, 2022 is here

April 29, 2017 Leave a comment

Trying to explain what Smart Mobile Studio does and the impact it can have on your development cycle is very hard. The market is rampant with superficial frameworks that promise you the world, and investors have been taken for a ride by hyped up, one-click "app makers" more than once.

I can imagine that being an investor is a bit like panning for gold. Things that glitter the most often turn out to be worthless – yet fortunes may hide beneath unpolished and rugged surfaces.

Software will disrupt most traditional industries in the next 5-10 years.
Uber is just a software tool, they don’t own any cars, yet they are now the
biggest taxi company in the world. -Source: R.M.Goldman, Ph.d

So I had enough. Instead of trying to tell people what I can do, I decided I'm going to show them instead. As the Americans say: "talk is cheap". And a working demonstration is worth a thousand words.

Care to back that up with something?

A couple of weeks ago I published a video on YouTube of our Smart Pascal based desktop booting up in VMWare. The Amiga forums went off the chart!

vmware

For those that haven't followed my blog or know nothing about the desktop I'm talking about, here is a short summary of the events so far:


Smart Mobile Studio is a compiler that takes pascal, like that made popular in Delphi or Lazarus, and compiles it to JavaScript instead of machine-code.

This product has shipped with an example of a desktop for years (called “Quartex media desktop”). It was intended as an example of how you could write a front-end for kiosk machines and embedded devices. Systems that could use a touch screen as the interface between customer and software.

You have probably seen those info booths in museums, universities and libraries? Or the ticket machines in subways, train-stations or even your local car-wash? All of those are embedded systems. And up until recently these have been small and expensive computers for running Windows applications in full-screen. Applications which in turn talk to a server or local database.

Smart Mobile Studio is able to deliver the exact same (and more) for a fraction of the price. A company in Oslo replaced their $300 per-board units with off-the-shelf $35 Raspberry Pi mini-computers. They then used Smart Pascal to write their client software and ran it in a fullscreen browser. The Linux distribution was changed to boot straight into Firefox in full-screen. No Linux desktop, just a web display.

The result? They were able to cut production cost by $265 per unit.


Right, back to the desktop. I mentioned the Amiga community. This is a community of coders and gamers that grew up with the old Commodore machines back in the 80s and 90s. A new Amiga is now on the way (just took 20+ years) – and the look and feel of the new operating-system, Amiga OS 4.1, is the look and feel I have used in The Smart Desktop environment. First of all because I grew up on these machines myself, and secondly because the architecture of that system was extremely cost-effective. We are talking about a system that delivered pre-emptive multitasking in as little as 512Kb of memory (!). So this is my “ode to OS 4” if you will.

And the desktop has caused quite a stir both in the Delphi community, cloud community and retro community alike. Why? Because it shows some of the potential cloud technology can give you. Potential that has been under their nose all this time.

And even more important: it demonstrates how productive you can be in Smart Pascal. The operating system itself, both visual and non-visual parts, was put together in my spare time over 3 weeks. Had I been able to work on it daily (as a normal job) I would have knocked it out in a week.

A desktop as a project type

All programming languages have project types. If you open up Delphi and click “new” you are greeted with a rich menu of different projects you can make. From low-level DLL files to desktop applications or database servers. Delphi has it all.

delphistuff

Delphi offers a wide range of projects types you can create

The same is true for Visual Studio. You click "new solution" and can pick from a wide range of different projects. Web projects, servers, desktop applications and services.

Smart Pascal is the only system where you click “new project” and there is a type called “Smart desktop” and “Smart desktop application”. In other words, the power to create a full desktop is now an integrated part of Smart Pascal.

And the desktop is unique to you. You get to shape it, brand it and make it your own!

Let us take a practical example

Imagine a developer given the task to move the company’s aging invoice and credit system from the Windows desktop – to a purely web-based environment.

legacy2

The application itself is large and complex, littered with legacy code and "quick fixes" going back decades. Updating such a project is itself a monumental task – but having to first implement concepts like what a window is, tasks, user space, cloud storage, security endpoints, look and feel, back-end services and database connectivity; all of that before you even begin porting the invoice system itself? The cost is astronomical.

And it happens every single day!

In Smart Pascal, the same developer would begin by clicking “new project” and selecting “Smart desktop”. This gives him a complete desktop environment that is unique to his project and company.

A desktop that he or she can shape, adjust, alter and adapt according to the needs of the employer. Things like file-type recognition, storage, getting that database – all of these things are taken care of already. The developer can focus on his task, namely to deliver a modern implementation of their invoice and credit software – not waste months trying to force JavaScript frameworks to do things they simply lack the depth to deliver.

Once the desktop has the look and feel in order, he would have to make a simple choice:

  • Should the whole desktop represent the invoice system or ..
  • Should the invoice system be implemented as a secondary application running on the desktop?

If it’s a large and dedicated system where the users have no need for other programs running, then implementing the invoice system inside the desktop itself is the way to go.

If however the customer would like to expand the system later, perhaps add team management, third-party web-services or open-office like productivity (a unified intranet if you like) – then the second option makes more sense.

On the brink of a revolution

The developer of 2022 is not limited to the desktop. He is not restricted to a particular operating system or chip-set. Fact is, cloud has already reduced these to a matter of preference. There is no strategic advantage of using Windows over Linux when it comes to cloud software.

Where a traditional developer writes and implements a solution for a particular system (for instance Microsoft Windows, Apple OS X or Linux) – cloud developers deliver whole ecosystems; constellations of software constructed from many parts, both micro-services developed in-house and services from others, like Amazon or Azure.

All these parts co-operate and can be combined through established end-point standards, much like how components are used in Delphi or Visual Studio today.

smartdesk

The Smart Desktop, codename “Amibian.js”

Access to products written in Smart is through the browser, or sometimes through a "paper thin" native host (Cordova Phonegap, Delphi and C/C++) that exposes system-level functionality. These hosts wrap your application in a native, executable container ready for Appstore or Google Play.

Now the visual content is typically the same, and is only adapted for a particular device. The real work is divided between the client (which is now very much capable) and your server back-end.

So people still write code in 2022, but the software behaves differently and is designed to function as a group (cluster). And this requires a shift in the way we think.

asmjs

Above: One of my asm.js prototype compilers. Let's just say it runs fast!

Scaling a solution from processing 100 invoices a minute to handling 100.000 invoices a minute – is no longer a matter of code, but of architecture. This is where the traditional, native-only approach to software comes up short, while more flexible approaches like node.js are infinitely more capable.

What has emerged up until now is just the tip of the iceberg.

Over the next five to eight years, everything is going to change. And the changes will be irrevocable and permanent.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for :)

The Smart Desktop back-end running as a system service on a Raspberry PI
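
Running the node.js back-end as a daemon usually boils down to a small service definition. Here is a minimal sketch for a systemd based distro (which covers most modern Raspberry PI images); the file name, paths and user below are assumptions for illustration only, adjust them to your own setup:

# /etc/systemd/system/amibian.service  (hypothetical name and paths)
[Unit]
Description=Smart Desktop node.js back-end
After=network.target

[Service]
# adjust the path to wherever your compiled server bundle lives
ExecStart=/usr/bin/node /home/pi/amibian/server.js
WorkingDirectory=/home/pi/amibian
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target

After saving the file you would enable and start it with "systemctl enable amibian" and "systemctl start amibian", and the back-end comes up automatically on every boot.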

As the Americans say, talk is cheap – and I’m done talking. I will do this with you, or without you. Either way it’s happening.

Nightly-build of the desktop can be tested here: http://quartexhq.myasustor.com/

Smart Desktop: Inter frame communication

April 24, 2017 Leave a comment

What is a process in a web desktop environment? I have been thinking a bit about this, and it's actually not that easy to pin down the reality of an external script or website running in a frame as a solid "concept" (I use the word entity in my research notes). Visually it appears as a secondary, isolated web application – but ultimately (or technically) it's just another web object with its own rendering context.

amibian_cortana

What I'm talking about is programs. Amibian.js or Smart desktop can already run a wealth of applications – which ultimately are just external or internal web-pages (a very simple term in some cases, but it will have to do). But when you want complex behavior from an eco-system, you must also provide an equally complex means of communication.

And this brings me neatly to the heart of today's task – namely how the desktop can talk to running web-applications and vice versa. And there are ultimately only two real options to choose from here: pure client, or client-server.

Unlike other embedded / web desktops I want to do as much as possible client-side. It would take me five minutes to implement message dispatching if I did this server-side, but the less dependent we are on a server – the more flexible amibian.js will be in terms of integration.

Screenshot

The first Desktop-Application type running in the desktop! And it's aware that it is running under Amibian. This frame/host awareness helps establish the parent/child relationship

Turns out that the browser has an API for this. A message API that allows a main website to talk with other websites embedded in IFrames. Which is just what we need.

But it's very crude and not as straightforward as you may think. You have commands like postMessage() and then you can set up an event-handler that fires whenever something comes through. But you also have security measures to think about. What if you interface with a web-application designed to kill the system? Without the safety of an abstraction layer that would indeed be a very easy hack.

The Smart message API

This has been in the RTL for quite some time but I haven't really used it for much. I had one demo a couple of years back that used the message-api to handle off-screen rendering in a separate IFrame (a sort of poor man's thread), but that was it.

Well, time to update and bring it into 2017! So I give you the new and improved inter-frame, inter-window and inter-domain message framework! (ta da!)

It should be self-evident how to use it, but a little context won't hurt:

  • You inherit from TW3MessageData and implement whatever data-fields you need
  • You send messages using the W3_PostMessage() method
  • You broadcast (i.e. send the message to all frames and windows listening) messages using the W3_BroadcastMessage() method

But here comes the twist, because I hate having to write a bunch of if/then/else switches for messages. So instead I created a simple subscription system! Super small yet very efficient!

msgbroker

In other words, you inform the system which messages you support, which then becomes your subscription. Whenever the local dispatcher encounters that particular message type – it will forward the message object to you.

Here is the code. Notice the helper functions that deals with finding the current domain and so on – this will be handy when establishing a parent/child relationship.

Also note: you are expected to inherit from these classes and shape them to your message types. I have implemented the bare-bones version that mimics the Windows message API. And yes, please google inter-frame communication while testing this.

unit SmartCL.Messages;

interface

uses
  w3c.dom,
  System.Types,
  System.Time,
  System.JSON,
  Smartcl.Time,
  SmartCL.System;

const
  CNT_QTX_MESSAGES_BASEID = 1000;

type

  TW3MessageData         = class;
  TW3CustomMsgPort       = class;
  TW3OwnedMsgPort        = class;
  TW3MsgPort             = class;
  TW3MainMessagePort     = class;
  TW3MessageSubscription = class;

  TW3MessageSubCallback  = procedure (Message: TW3MessageData);
  TW3MsgPortMessageEvent = procedure (Sender: TObject; EventObj: JMessageEvent);
  TW3MsgPortErrorEvent   = procedure (Sender: TObject; EventObj: JDOMError);

  (* The TW3MessageData represents the actual message-data which is sent
     internally by the system. Notice that it inherits from JObject, which
     means it supports 1:1 mapping from JSON.
     This is why Deserialize is a member, and Serialize is a class member. *)
  TW3MessageData = class(JObject)
  public
    property    ID: Integer;
    property    Source: String;
    property    Data: String;

    function Deserialize: string;
    class function Serialize(const ObjText: string): TW3MessageData;

    constructor Create;
  end;

  (* A "message port" is a wrapper around the [obj]->Window[]->OnMessage
     event and [obj]->Window[]->PostMessage API.
     Where [obj] can be either the main window, or an embedded
     IFrame->contentWindow. This base-class implements the generic behavior.
     You must inherit or use a descendant class if you don't know exactly what
     you are doing! *)
  TW3CustomMsgPort = Class(TObject)
  private
    FWindow:    THandle;
  protected
    procedure   Releasewindow;virtual;
    procedure   HandleMessageReceived(eventObj: JMessageEvent);
    procedure   HandleError(eventObj: JDOMError);
  public
    Property    Handle:THandle read FWindow;
    Procedure   PostMessage(msg:Variant;targetOrigin:String);virtual;
    procedure   BroadcastMessage(msg:Variant;targetOrigin:String);virtual;
    Constructor Create(WND: THandle);virtual;
    Destructor  Destroy;Override;
  published
    property    OnMessageReceived: TW3MsgPortMessageEvent;
    Property    OnError: TW3MsgPortErrorEvent;
  end;

  (* This is a baseclass for window-handles that already exist.
     You are expected to provide that handle via the constructor *)
  TW3OwnedMsgPort = Class(TW3CustomMsgPort)
  end;

  (* This message-port type is very different from both the
     ad-hoc baseclass and "owned" variation above.
     This one actually creates its own IFrame instance, which
     means it's a completely stand-alone entity which doesn't need an
     existing window to dispatch and handle messages *)
  TW3MsgPort = Class(TW3CustomMsgPort)
  private
    FFrame:     THandle;
    function    AllocIFrame: THandle;
    procedure   ReleaseIFrame(aHandle: THandle);
  protected
    procedure   ReleaseWindow; override;
  public
    constructor Create; reintroduce; virtual;
  end;

  (* This message port represents the "main" message port for any
     application that includes this unit. It will connect to the main
     window (deriving from the "owned" base class) and has a custom
     message-handler which dispatches messages to any subscribers *)
  TW3MainMessagePort = Class(TW3OwnedMsgPort)
  protected
    procedure HandleMessage(Sender: TObject; EventObj: JMessageEvent);
  public
    constructor Create(WND: THandle); override;
  end;

  (* Information about registered subscriptions *)
  TW3SubscriptionInfo = Record
    MSGID:    integer;
    Callback: TW3MessageSubCallback;
  end;

  (* A message subscription is an object that allows you to install
     X number of event-handlers for messages you want to receive. It's
     important to note that all subscribers to a message will get the
     same message -- there is no blocking or ownership concepts
     involved. This system is a huge improvement over the older WinAPI *)
  TW3MessageSubscription = class(TObject)
  private
    FObjects:   Array of TW3SubscriptionInfo;
  public
    function    ValidateMessageSource(FromURL: string): boolean; virtual;
    function    SubscribesToMessage(const MSGID: integer): boolean; virtual;
    procedure   Dispatch(const Message: TW3MessageData); virtual;
    function    Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
    procedure   Unsubscribe(const Handle: THandle);

    Constructor Create; virtual;
    Destructor  Destroy; override;
  end;

  (* Helper functions which simplify message handling *)
  //function  W3_MakeMsgData:TW3MessageData;
  procedure W3_PostMessage(const msgValue: TW3MessageData);
  procedure W3_BroadcastMessage(const msgValue: TW3MessageData);

  (* Audience returns true if a message-ID has any
     subscriptions assigned to it *)

  (* This returns true if the current smart application is embedded in a frame.
     That can be handy to know when establishing parent/child relationships
     between main-application and child "programs" running in frames *)
  function AppRunningInFrame: boolean;
  function GetUrlLocation: string;
  function GetDomainFromUrl(URL: string; const IncludeSubDomain: boolean): string;

implementation

uses SmartCL.System;

var
_mainport:    TW3MainMessagePort = NIL;
_subscribers: Array of TW3MessageSubscription;

function AppRunningInFrame: boolean;
var
  LMyFrame: THandle;
begin
  // window.frameElement Returns the element (such as <iframe> or <object>)
  // in which the window is embedded, or null if the window is top-level.
  try
    // Note: attempting to read frameElement will throw a SecurityError
    // exception in cross-origin iframes. It should just return null,
    // but not all browsers play by the rules -- hence the try/catch
    asm
      @LMyFrame = window.frameElement;
    end;
  except
    on e: exception do
    begin
      // a SecurityError here means we are inside a cross-origin frame
      result := true;
      exit;
    end;
  end;
  // a non-null frameElement means the window is embedded in a frame
  if (LMyFrame) then
    result := true
  else
    result := false;
end;

function GetUrlLocation: string;
begin
  asm
    @result = window.location.hostname;
  end;
end;

function GetDomainFromUrl(Url: string; const IncludeSubDomain: boolean): string;
begin
  url := Url.trim();
  if url.length < 1 then
  begin
    asm
      @Url = window.location.hostname;
    end;
  end;
  asm
    @url = (@url).replace("/(https?:\/\/)?(www.)?/i", '');
    if (!@IncludeSubDomain) {
        @url = (@url).split('.');
        @url = (@url).slice((@url).length - 2).join('.');
    }
    if ((@url).indexOf('/') !== -1) {
      @result = (@url).split('/')[0];
    } else {
      @result = @url;
    }
  end;
end;

//#############################################################################
//
//#############################################################################

procedure QTXDefaultMessageHandler(sender:TObject;EventObj:JMessageEvent);
var
  x:      Integer;
  mItem:  TW3MessageSubscription;
  mData:  TW3MessageData;
begin
  mData := TW3MessageData.Serialize(EventObj.Data);
  //mData := new TW3MessageData();
  //mData.Serialize(EventObj.Data);

  for x:=0 to _subscribers.count-1 do
  Begin
    mItem := _subscribers[x];

    // Check that the message is subscribed to
    if mItem.SubscribesToMessage(mData.ID) then
    begin
      // Make sure subscriber validates the caller-domain
      if mItem.ValidateMessageSource( TVariant.AsString(EventObj.source) ) then
      begin
        (* We execute with a minor delay, allowing the browser to
         exit the function before we dispatch our data *)
        TW3Dispatch.Execute(procedure ()
        begin
          mItem.Dispatch(mData);
        end, 10);
      end;
    end;
  end;
end;

{ function W3_MakeMsgData: TW3MessageData;
begin
  result := new TW3MessageData();
  result.Source := "*";
end;      }

function W3_GetMsgPort: TW3MainMessagePort;
begin
  if _mainport = NIL then
  _mainport := TW3MainMessagePort.Create(BrowserAPI.Window);
  result := _mainport;
end;

procedure W3_PostMessage(const msgValue:TW3MessageData);
begin
  if msgValue<>NIL then
  W3_GetMsgPort().PostMessage(msgValue.Deserialize,msgValue.Source) else
  raise exception.create('Postmessage failed, message object was NIL error');
end;

procedure W3_BroadcastMessage(const msgValue:TW3MessageData);
Begin
  // note: we pass the serialized JSON text, since the receiving handler
  // expects a string it can JSON.parse back into a TW3MessageData
  if msgValue<>NIL then
  W3_GetMsgPort().BroadcastMessage(msgValue.Deserialize,msgValue.Source) else
  raise exception.create('Broadcastmessage failed, message object was NIL error');
end;

function  W3_Audience(msgId: Integer): boolean;
var
  x:  Integer;
  mItem:  TW3MessageSubscription;
begin
  result:=False;
  for x:=0 to _subscribers.count-1 do
  Begin
    mItem:=_subscribers[x];
    result:=mItem.SubscribesToMessage(msgId);
    if result then
    break;
  end;
end;

//#############################################################################
// TW3MainMessagePort
//#############################################################################

Constructor TW3MessageData.Create;
begin
  self.Source:="*";
end;

function  TW3MessageData.Deserialize: string;
begin
  result := JSON.Stringify(self);
end;

class function TW3MessageData.Serialize(const ObjText: string): TW3MessageData;
begin
  result := TW3MessageData(JSON.parse(ObjText));
end;

//#############################################################################
// TW3MainMessagePort
//#############################################################################

constructor TW3MainMessagePort.Create(WND: THandle);
begin
  inherited Create(WND);
  OnMessageReceived := QTXDefaultMessageHandler;
end;

procedure TW3MainMessagePort.HandleMessage(Sender: TObject; EventObj: JMessageEvent);
begin
  QTXDefaultMessageHandler(self, eventObj);
end;

//#############################################################################
// TW3MessageSubscription
//#############################################################################

Constructor TW3MessageSubscription.Create;
begin
  inherited Create;
  _subscribers.add(self);
end;

Destructor TW3MessageSubscription.Destroy;
Begin
  _subscribers.Remove(self);
  inherited;
end;

function TW3MessageSubscription.Subscribe(const MSGID: integer; const CB: TW3MessageSubCallback): THandle;
var
  LObj: TW3SubscriptionInfo;
begin
  LObj.MSGID := MSGID;
  LObj.Callback := @CB;
  FObjects.add(LObj);
  result := LObj;
end;

procedure TW3MessageSubscription.Unsubscribe(const Handle: THandle);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  begin
    if variant(FObjects[x]) = Handle then
    Begin
      FObjects.delete(x,1);
      break;
    end;
  end;
end;

function TW3MessageSubscription.ValidateMessageSource(FromURL: string): boolean;
begin
  // by default we accept messages from anywhere
  result := true;
end;

function TW3MessageSubscription.SubscribesToMessage(const MSGID: integer): boolean;
var
  x:  Integer;
begin
  result:=False;
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = MSGID then
    Begin
      result:=true;
      break;
    end;
  end;
end;

procedure TW3MessageSubscription.Dispatch(const Message:TW3MessageData);
var
  x:  Integer;
begin
  for x:=0 to FObjects.Count-1 do
  Begin
    if FObjects[x].MSGID = Message.ID then
    Begin
      if assigned(FObjects[x].Callback) then
      FObjects[x].Callback(Message);
      break;
    end;
  end;
end;

//#############################################################################
// TW3MsgPort
//#############################################################################

Constructor TW3MsgPort.Create;
begin
  FFrame:=allocIFrame;
  if (FFrame) then
  inherited Create(FFrame.contentWindow) else
  Raise Exception.Create('Failed to create message-port error');
end;

procedure TW3MsgPort.ReleaseWindow;
Begin
  ReleaseIFrame(FFrame);
  FFrame:=unassigned;
  Inherited;
end;

Procedure TW3MsgPort.ReleaseIFrame(aHandle:THandle);
begin
  If (aHandle) then
  Begin
    asm
      document.body.removeChild(@aHandle);
    end;
  end;
end;

function TW3MsgPort.AllocIFrame: THandle;
Begin
  asm
    @result = document.createElement('iframe');
  end;

  if (result) then
  begin
    (* if no style property is present, we provide one *)
    if not (result['style']) then
    result['style'] := TVariant.createObject;

    (* set the display style to hidden so the helper frame never shows *)
    result['style'].display := 'none';

    asm
      document.body.appendChild(@result);
    end;

  end;
end;

//#############################################################################
// TW3CustomMsgPort
//#############################################################################

Constructor TW3CustomMsgPort.Create(WND: THandle);
Begin
  inherited Create;
  FWindow := WND;
  if (FWindow) then
  begin
    FWindow.addEventListener('message', @HandleMessageReceived, false);
    FWindow.addEventListener('error', @HandleError, false);
  end;
End;

Destructor TW3CustomMsgPort.Destroy;
begin
  if (FWindow) then
  begin
    FWindow.removeEventListener('message', @HandleMessageReceived, false);
    FWindow.removeEventListener('error', @HandleError, false);
    ReleaseWindow;
  end;
  inherited;
end;

procedure TW3CustomMsgPort.HandleMessageReceived(eventObj:JMessageEvent);
Begin
  if assigned(OnMessageReceived) then
  OnMessageReceived(self,eventObj);
End;

procedure TW3CustomMsgPort.HandleError(eventObj:JDOMError);
Begin
  if assigned(OnError) then
  OnError(self,eventObj);
end;

procedure TW3CustomMsgPort.Releasewindow;
begin
  FWindow := Unassigned;
end;

Procedure TW3CustomMsgPort.PostMessage(msg:Variant;targetOrigin:String);
begin
  if (FWindow) then
  FWindow.postMessage(msg,targetOrigin);
end;

// This will send a message to all frames
procedure TW3CustomMsgPort.BroadcastMessage(msg:Variant;targetOrigin:String);
var
  x:  Integer;
  mLen: Integer;
begin
  mLen := TVariant.AsInteger(browserAPI.Window.frames.length);
  for x:=0 to mLen-1 do
  begin
    browserAPI.window.frames[x].postMessage(msg,targetOrigin);
  end;
end;

finalization
begin
  if assigned(_mainport) then
  _mainport.free;
end;

end.
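
To give you an idea of how the unit is meant to be consumed, here is a minimal usage sketch. The message id and payload below are made up for the example:

const
  // application defined message id, based on the constant from the unit
  MSG_HELLO = CNT_QTX_MESSAGES_BASEID + 1;

var
  Subscription: TW3MessageSubscription;

procedure SetupMessaging;
var
  Msg: TW3MessageData;
begin
  // register interest in MSG_HELLO; the dispatcher forwards any
  // incoming message with that id to the callback below
  Subscription := TW3MessageSubscription.Create;
  Subscription.Subscribe(MSG_HELLO,
    procedure (Message: TW3MessageData)
    begin
      writeln('Hello from ' + Message.Source + ': ' + Message.Data);
    end);

  // broadcast a message of the same type to every frame that listens
  Msg := TW3MessageData.Create;
  Msg.ID := MSG_HELLO;
  Msg.Data := 'Greetings from the desktop';
  W3_BroadcastMessage(Msg);
end;

A child application running inside a frame would set up the exact same subscription on its side, which is what makes the parent/child relationship work in both directions.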

Amibian.js and the Narcissus hack

April 22, 2017 Leave a comment

Wow, I must admit that I never really thought Amibian.js would become even remotely as popular as it has – yet people respond with incredible enthusiasm to our endeavour. I was just told that an article at Commodore USA mentioned us – that's exposure to 37,000 readers. Add that to the roughly 40,000 people that subscribe to my feeds around the world and I must say: I hope I code something worthy of your time!

amibian_cortana

This is going to look and behave like the bomb when I'm done!

But there is a lot of stuff on the list before it's even remotely finished. This is due to the fact that I'm not just juggling one codebase here – I'm juggling 5 separate yet interconnected codebases at the same time (!). First there is the Smart Mobile Studio RTL (run time library) which represents the foundation. This gives me object-oriented, fully inheritance-driven visual controls. It has roughly 5 years of work behind it.

138-Quake_III_Arena-1479915663

Still #1 after all these years

On top of that you have the actual visual controls, like buttons, scrollbars, lists, css3 effect engines, tweening, database storage and a ton of low-level stuff. The browser has no idea what a window is, for example, let alone how it should look or respond to users. So every little piece has to be coded by someone. And well, that's what I do.

Next you have the workbench and operating-system itself. What you know as Amibian.js, Smart Workbench or Quartex media desktop – take your pick, but it’s already a substantial codebase spanning some 40 units with thousands of lines of code. It is divided into two parts: the web front-end that you have all seen; and the node.js backend that is not yet made public.

And on top of that you have the external stuff. Quake III didn’t spontaneously self-assemble inside the desktop, someone had to do some coding and make the two interface. Same with all the other features you have.

The worst so far (as in damn hard to get right) is the Ace text editor. Ace by itself is super easy to work with – but you may have noticed that we have removed its scrollbars and replaced these with Amiga scrollbars instead? That is a formidable challenge in its own right.

patch2

Aced it! Kidnapping signals was a challenge

What's on the menu this weekend?

I noticed that on Linux text-selection was utterly messed up, so when you moved a window around – it would suddenly start selecting the title text of other elements around the desktop. This is actually a bug in the browser – not my code; but I still have to code around it. Which I have now done.

I also solved selection for the console window (or any “text” container. A window is made up of many parts and the content region can be inherited from and replaced), so that should now work fine regardless of browser and platforms. Ace theming also works, and the vertical scrollbar is responding as expected. Still need a few tweaks to move right, but that is easy stuff. The hard part is behind us thankfully.

patch

Selection – works

Right now I’m working on ScummVM so that should be in place later today ūüôā

That's cool, but what motivates you?

2jd3x2q

Cult of Joy

Retro gaming is important, and we have to make it as easy as possible for people to enjoy their retro gear without patent trolls ruining the fun. I'm just so tired of how ruined the Amiga scene is by these (3 companies in particular).. thieves is the only word I can find that fits.

So fine! I will make my own. Come hell or high water. Free as a bird and untouchable.

So I have made some tools that will make it ridiculously easy for you to share, download and play your games online. Whenever you want, hosted wherever you need, and there is not a god damn thing people can do about it. When you realize how simple the hack is it will make you laugh. I came up with this ages ago and dubbed it Narcissus.

To understand the Narcissus hack, consider the following:

PNG is a lossless compression format, meaning that it doesn't lose any information when compressed. It's not like JPEG, which scrambles the original and saves a facsimile that tricks the eye. Nope, if you compress a PNG image you get the exact same data out when you decode it (read: show it).

But who said we have to store pixels? Pixels are just bytes after all. In fact, why can't we take a whole game disk or ROM and store that inside a picture? Sure you can!
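
To make the idea concrete, here is a minimal sketch of the packing step in plain object pascal. The function and byte layout are my own simplification for illustration – this is not the actual Smart RTL encoder:

type
  TPayload = array of byte;

// Pack an arbitrary payload into a raw RGBA pixel buffer that can be
// handed to any lossless PNG encoder. A 4 byte header stores the
// payload length so the decoder knows where the real data ends.
function BytesToPixelBuffer(const Data: TPayload): TPayload;
var
  TotalBytes, PixelCount, Side, x: integer;
begin
  TotalBytes := Length(Data) + 4;

  // each RGBA pixel carries 4 payload bytes
  PixelCount := (TotalBytes + 3) div 4;

  // pick a square image just large enough to hold every pixel
  Side := Trunc(Sqrt(PixelCount)) + 1;
  SetLength(result, Side * Side * 4);

  // length header, little endian
  result[0] := Length(Data) and $FF;
  result[1] := (Length(Data) shr 8) and $FF;
  result[2] := (Length(Data) shr 16) and $FF;
  result[3] := (Length(Data) shr 24) and $FF;

  // copy the payload; the remaining bytes stay zero as padding
  for x := 0 to Length(Data) - 1 do
    result[4 + x] := Data[x];
end;

One practical caveat: if you push the buffer through a browser canvas instead of a proper PNG encoder, the alpha channel gets premultiplied and you lose data – so real implementations either skip the alpha byte or force it to 255.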

cool

This tool is now built into Amibian.js

It’s amusing, I came up with this hack years ago. It has been a part of Smart Mobile Studio since the beginning.

You have to remember that retro games are super small compared to modern games. The average ADF file is what? 880kb or something like that? Well hold on to your hat buddy, because PNG can hold 64 megabyte of data! You can encode a decent Amiga hard disk image in 64 megabyte.

eatme

Can you guess what the picture on the right contains? This picture is actually ALL the Amiga rom files packed into a single image. Don't worry, I converted it to JPEG to mess up the data before uploading. But yes, you can now host not just the games as normal picture files, but also roms and whatever you like.

And the beauty of it – who the hell is going to find them? You can host them on Github, Google Drive, Dropbox or right on your blog — if you don't have the encryption key the file is useless.

Snap, crackle and pop!

RSS Filesystem

You know RSS feeds right? If you sign up for a blog you automatically get an RSS feed. It's basically just a list of your recent posts – perhaps with an extract from each article, a thumbnail picture and links to each post. RSS has been around for a decade or more. It's a great way to keep track of news.

The second hack is that using the data-to-image-encoder you can store a whole read-only filesystem as a normal RSS feed. Always think outside the box!
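
Purely as an illustration of the idea, a read-only feed based filesystem could be modelled client-side with something like the sketch below. Every type and field name here is hypothetical; the real implementation has not been published:

type
  // one "file" in the feed: a blog post carrying a data-image
  TFeedFileEntry = record
    FileName:  string;   // e.g. 'lemmings.adf'
    ImageUrl:  string;   // url of the data-image attached to the post
    ByteSize:  integer;  // original payload size before png packing
    Encrypted: boolean;  // true if a key is needed to unpack the pixels
  end;

  // the whole feed then becomes a read-only directory listing
  TFeedFileSystem = class(TObject)
  private
    FEntries: array of TFeedFileEntry;
  public
    procedure AddEntry(const Entry: TFeedFileEntry);
    function  Count: integer;
  end;

procedure TFeedFileSystem.AddEntry(const Entry: TFeedFileEntry);
begin
  FEntries.add(Entry);
end;

function TFeedFileSystem.Count: integer;
begin
  result := FEntries.Count;
end;

The client would fill such a table by walking the feed items, and only download and decode a data-image when a "file" is actually opened.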

Let’s say you have a game collection for your Amiga right? Lets say 200 games. Wouldnt it be nice to have all those games online? Just readily available regardless of where you may be? Without¬†“you know who” sending you a nasty email?

Well, just encode your game as described above, include the data-picture in your WordPress post, and do that for each of your games. Since you can encrypt these images they will be worthless to others. But for you its a neat way of hosting all your games online for free (like WordPress or Blogger) and play them via Amibian or the patched UAE4Arm (ops, did I share that, sorry dirk *grin*) and¬†you’re home free.

You know what’s really cool? For this part Amibian doesnt even need a server. So you can just save the Amibian.js html page on your phone and that’s all you need.

RumoredPirates

Drink up me hearties yo ho!

Smart Puppy: Smart pascal meets linux!

April 21, 2017 Leave a comment

logo_waifu2x_art_noise1_scale_tta_1

One of my absolute favorite operating-systems in the whole world has to be Puppy Linux. I discovered it just a few days ago and I have fallen completely in love with this thing. I can vaguely remember giving it a test drive a few years back, but I didn't know much about Linux in general so I didn't understand what it represented.

So if you are looking for a friendly, small, fast and easy to use Linux system – then Puppy is about as friendly as it gets. The Facebook user group of the same name is a warm and friendly place to be. Much like Delphi Developer, the admins take pride in keeping things orderly – and the people who hang out there engage, care and help each other out.

Before you run out and download Puppy, which I hope you do later – please understand that Puppy is very different from Linux in general. You could almost say that it’s a whole alternative to mainstream Linux as we know it.

But, once you know about the differences then you are in for a treat! I will explain them in the article, so please be patient and take the time to digest.

Puppies hate fluff

One of the reasons I never converted wholesale to Linux (and yes I did try) – is that the average Linux distro is unbearably and unnecessarily cryptic. For some reason Linux architects suffer from a terrible affliction, namely a shortage of characters. This sickness means that Linux doesn't have enough characters for everyone, so programmers must use a maximum of five letters when naming their software. If coders ignore this shortage and blatantly name something directly or intuitively – then Richard Stallman and Lady Gaga will order a "drive-by pony-tail cut" on the dude. And a Linux administrator without his pony-tail is finished (the nerd equivalent of flipping burgers at McDonalds).

Puppy Linux does contain its fair share of the classical Linux software (that goes without saying). But the man behind this wonderful Linux flavour is also a level-headed, clever and resourceful man (or woman) – so he has thankfully broken with what can only be described as archaic thinking.

puppy01

Puppy Linux is not exactly software impaired

So even with my minimal Linux experience I was able to navigate around the filesystem and locate documents (which here are even called "Documents" and "My Documents"). There is a whole bunch of these tiny differences, small things that make all the difference. From the way he (or she) has named things – to where things are stored and placed.

And it’s so small! The basic install is less than 300 megabytes in size (!) Yes you read that right. The generic Puppy Linux installation with desktop and a few popular applications is less than 300 megabytes.

In my case I can have a fully loaded development studio, featuring GCC, FPC (freepascal), the Lazarus IDE, the CodeBlocks IDE, the KDevelop IDE, the Anjuta developer studio – and last but never least Smart Mobile Studio on a 2 gigabyte USB stick (!) I don't think you can even get USB sticks that small any more (?) The smallest I have is 32 gigabyte and the largest is 256 gigabyte.

But before we go on with the wonders of Puppy Linux – let's look at what Linux did wrong. Why is Linux even to this day considered hard to use? Or to put it another way: what have Windows and OS X done right to be considered easier to use yet capable of the same (and often more)?

Naming, what Linux did wrong

One of the tenets of professional programming is to ensure that classes, members and functions have meaningful names. There was a time when you would get away with single-character class, variable and method names — but that won't fly in 2017. Your QA department would have you for breakfast if you checked in code like that. Classes, name-spacing and packages should be descriptive. End of story.

The reason this has become an almost sacred law, should be obvious: it may not be you that maintains the software 5, 10 or 15 years down the road. A piece of code should always be written in such a way that it can be understood and thus maintained by others within a reasonable time-frame (which also means plenty of comments and good documentation). This is not a matter of preference, but of time and money. And when you pay out salaries these factors are one and the same.

So naming elements of software in 2017 has a lot of criteria attached to it. The most obvious so far being:

  • Always name things clearly because that
    • ensures ease of use
    • simplifies maintenance
    • removes doubt as to "what is what"
    • leads to fewer user-mistakes
  • The fewer mistakes, either in understanding something or using something, the less money a business throws out the window. Money that could be spent paying you to make something cool instead (or fix bugs that are critical).
  • The fewer user-mistakes caused by customers, the more your service department can focus on quality of service. When a company starts out it usually has outstanding support, but as it grows the service-desk slowly turns into robots.
  • The easier and more intuitive a system is, the more users it will attract. If people can pick something up and just naturally figure out how things work, then statistics show that they will most likely continue using it through thick and thin.

Right. With these rules in mind – what happens if you take them but apply them to Linux instead? Not Linux code or libraries or stuff like that, but Linux the user-experience from top to bottom?

And don’t get me wrong, I think Linux is awesome so this is not an attack on Linux; I’m simply pointing out factors that could help make Linux even better.

I mean, just look at the Linux filesystem. Again you have this absurd shortage of characters. Why would anyone abbreviate the word "user[s]" into "usr"? It makes no sense. Same with "lib", would it have killed you to call it "libraries"? And so it continues with "dev" – because calling it "devices" would cause the space-time continuum to break.

Shell shocked

The shell (or command-line under Windows) and its commands are really the thing that annoys me the most. There is a fine line between use and abuse, and the level of abbreviation here is beyond whimsical and harmless – and well into the realm of silly and absurd.

Who in their right mind would name a command "ps"? What could it possibly mean? The first thing that comes to mind is "print spool". If you come from any other platform than Linux (and perhaps Unix, I don't know) you would never imagine that "ps" actually means "list all running processes and their states".

ps_command

“ps” lists the running processes and their states

Above: running “ps” from the shell lists the running processes. Would it have killed the coders to just call it, oh perhaps, “listprocesses” or “showrunningprograms”?

The “ps” command is just one in a long, long list of commands that really should be brought into the twenty-first century. The benefits should be obvious. It should not be necessary for a 43-year-old man to blog about this, because it’s been a problem for the better part of three decades.

  • Kids and teenagers are the bread and butter for all operating systems. The faster a kid or teenager can do something with a system, the more loyal that individual will be to the platform in the future.
  • Linux needs developers and users from other platforms. When someone who has been a successful developer for almost 30 years finds a system cryptic and hard to use, how much harder will it be for a non-technical user?
  • Standards are important. The location of files, libraries and settings should be uniform. As of writing, Linux seems to have 3 different standards (again, I am no expert): systemd, initd and "systemx". The latter is just a name I made up, because no-one really knows what it's called. We are now in the realm of PlayStation, ChromeOS, WebOS and systems that build on Linux – but deviate the moment the drivers have loaded.

Again, I’m not writing this in a negative mindset. I have been using Ubuntu for a while¬†as an alternative to Windows and OS X. But this has been a purely user-centric experience. I have not done any programming except random bits of Freepascal and node.js experiements. I have enjoyed Ubuntu purely as a user. Writing documents, checking email, browsing the web, IRC, reading news groups – ordinary stuff.

So I am very positive to Linux, but I have yet to find “my” flavour of it. A Linux¬†distro I feel at home with and that appeals to my way of working.

Until today that is..

Enter Puppy Linux

Puppy is a flavour of Linux that just demolishes some of Linux's holiest concepts. Everyone will tell you never to run as root, always leave the root account in peace – and keep it under lock and key just in case someone gets into your second account, right?

Well not Puppy. Here you are expected to run as root, and you can, if you for some reason must, jump out into a secondary user which is fake. So indeed – Puppy Linux is a single-user Linux system. It's the rebel, the scoundrel and rogue of the Linux world – the distro that couldn't care less what the other guys are doing.

gcc.png

Fancy a spot of coding? GCC is an SFS module away ..

Secondly, and this is very cool, Puppy is highly modular. No I’m not talking about packages, all Linux distros have that in some form or another. No I’m talking about something called SFS files, short for squashed file-system.

To make a long story short, Puppy allows you to mount compressed files as disks and they become a part of the system. It's a bit like the virtual-drive API on Windows (if you have ever coded against that?). You may have noticed in Windows how you can double-click on a .ISO file and suddenly the file is mounted in the file-explorer and stays mounted until you manually dismount the damn thing?

Well, SFS is that but also much more. Because when you mount the SFS file, whatever applications it contains register on the start-menu, add themselves to the global path and essentially become one with the whole system. It took me a while to wrap my head around this (good features always come with a price, so I kept waiting for the negative. But there was none!). The people I talked to about this were not coders, so they had some very colorful explanations of how it all worked. But once I realized SFS was just a zip-file (or tarball or whatever) with a fixed structure (including a mount script and a dismount script) I got the picture.

Size and speed matters

Before I started using a PC back in the early 90's I was a huge Amiga fan. I still am (as you no doubt have noticed). One of the first things I found, or the first difference between Amiga computing and PC computing that hit me – was how wasteful PCs were. I remember I was shocked when I saw how much space and cpu power the average programmer just wasted — because on the Amiga everyone strived to be as resourceful and efficient as possible.

We would spend days optimizing even the smallest parts of our applications just to ensure that they ran at top speed and produced as little bloat as possible. This was just baked into us, it was the way of the force and as common as your grandfather's work ethic. Quality and achievement went hand in hand.

cb

CodeBlocks is an excellent IDE 🙂

When you fire up Puppy Linux you are instantly reminded that there are people to this day who care about size and speed. And that maybe, just maybe, consumerism has tricked you into throwing away perfectly usable technology year after year. Machines that actually had more than enough CPU power for the tasks you wanted, but were slowed down by bloated operating-systems, poor programming and lazy code generators.

Puppy Linux is the fastest bloody Linux you will ever run. The only operating system I have tried that runs faster is Aros compiled for ARM (a distro called Aeros, a reverse-engineered edition of Amiga OS). But as far as x86 and the Linux kernel goes — Puppy Linux is the bomb.

I know I'm repeating myself here but: less than 300 megabytes for a fully loaded Linux distro with text processor, browser, devkit, music player, video player and all the "typical" applications you would use for daily tasks? And it truly is the fastest hunk of junk in the galaxy, without question.

Amiga coders and the cult of joy

When I started to snoop around the Puppy environment and community, I started to notice a couple of "tell-tale" signs. Tiny, subtle things that only an Amiga coder would pick up on. Enough to give you a hunch, a gut feeling – but not enough to blatantly say it out loud. "Amiga guys did this" I would whisper to myself. And it's not really such a big surprise to find coders now in their 40s who used to be Amiga coders.

In 30 years' time there will be company owners and CEOs who grew up with PlayStation and have fond memories of that. But they won't recognize each other by their craftsmanship – that is the difference.

cult

The cult of joy lives on, albeit in new forms

The Amiga was special because it was not just a games machine. It was also a complete rewrite of what constituted the power operating system of its time: Unix. In other words they copied the best stuff from Unix (which by the way had the same absurd filesystem as Linux still has) but cleaned it up. The first thing to be cleaned was (drumroll) the filesystem. But that's another story altogether.

When I entered the Puppy Linux forum I naturally mentioned that I was a complete and total Linux novice, and that my favorite machine before x86 was an Amiga. And what do you think happened? Let's just say that more than a few greeted me with open arms. These were the Amiga users that went to Linux when Commodore went under all those years ago. And they have been active in shaping Linux ever since (!)

So yeah, I had a great time on their forums – and it was like running into a long-lost cousin or something. Like if you haven't seen a family member in 30 years and suddenly you meet them face to face.

Tired of 30 gigabyte operating systems?

Puppy Linux is not for everyone. It's the kind of system you either love or hate. I have yet to find someone on middle-ground regarding Puppy. Either you love it, or you hate it. Or if you prefer: either you use it and are thrilled about it, or you never install it.

It has a lot of good things going for it:

  • It is built to be one of the smallest working desktop environments you can get
  • It is built according to "the old ways", where speed, efficiency and size matter
  • It runs fine on older hardware (my test machine is an 8-year-old laptop) and makes stuff you would otherwise throw away valuable again.
  • It is storage abstracted, meaning you can have all your personal stuff inside a single SFS archive (easier to back up), while the operating system remains on a USB stick.
  • You don't have to permanently install it (again, boot from a USB stick).
  • It is single-user by default, which is perfect for IOT projects and devices!
  • It supports ARM, so you can now enjoy this awesome thing on the Raspberry PI 3!
  • It's Linux, so it has all the benefits of a rich driver database
  • The latest Puppy is binary compatible with Ubuntu (whatever that means)
  • There are 3 different desktops for it (to my knowledge), so if you don't like the default client just install something else
  • It is the perfect rescue USB stick. At less than 300 Mb you can fit it on any old USB stick you have around the house. I think the smallest you can buy now is 4 GB
  • It has a warm, helpful, friendly and international group of users

Oh and it’s free!

As a final note: I installed Wine, the system that makes it possible to run Windows software on Linux (not an emulator, more of an api-call middle-ware /slash/ dispatcher). I was quite surprised to see it run Smart Mobile Studio on the first try!

So fancy a bit of hacking this weekend? Why not give Puppy a go?

Check it out here: http://puppylinux.org/main/Download%20Latest%20Release.htm

Smart Pascal + WebOS = true

April 15, 2017 Leave a comment

LG-WebOSIf you own a television (and who doesn’t) chances are you own an LG model. The LG brand has been on the market since before recorded history (or so it feels) and have been a major player for decades.

What is little known though, is that LG also owns and finance a unique operating system. This is not uncommon these days, most NAS and cloud vendors have their own system (there are roughly 20 cloud based systems on the market that I know of). The only way to make a Smart TV (sigh) is to add a computer to it. And if you buy a television today chances are it contains a small embedded board running some custom-made operating system designed to do just that.

Every vendor has their own system, and those that don't usually end up forking Android and adapting it to their needs.

Luna WebOS

LG's system is called WebOS. This may sound like yet another HTML eye-candy front end, but hear me out, because this OS is not what it seems.

VirtualBox_Developers LuneOS emulator appliance 20151006131924-stable-038-253_15_04_2017_16_01_43

Except for some font issues (see disk label) Smart loaded fine under WebOS

Remember Palm OS? If you are between 35 and 45 you should remember that before Apple utterly demolished the mobile-phone market with their iPhone, one of the most popular brands was Palm. Their core product was the "Palm Pilot", a kind of digital filofax (personal organizer) and mobile phone rolled into one.

WebOS is based on what used to be called Palm OS, but it has been completely revamped, given a sexy new user interface (looks a lot like Android to be honest!) and brought into the present age. And best of all: it's 100% free! It runs on a plethora of systems, from x86 to Raspberry PI to MIPS. It is used in televisions, terminals and set-top boxes around the world – and is quite popular amongst engineers.

Check out their new portal here; there are also plenty of links to pre-built images if you look around: https://pivotce.com/

JavaScript application stack

One of the coolest features, in my view, is that their applications are primarily written in JavaScript. The OS itself is native of course, but they have discovered that JavaScript and HTML5 are an excellent way to build applications. Applications that are easy to control, sandboxed and safe, yet incredibly powerful and capable.

VirtualBox_Developers LuneOS emulator appliance 20151006131924-stable-038-253_15_04_2017_16_00_46

Smart Desktop booting Quake 3 in Luna OS

Well, today I sent them an email requesting their SDK. I know they use Enyo.js as their primary and suggested framework – but that is no problem for Smart Mobile Studio. A better question is: can Luna handle our codebase?

When I loaded up Quake 3 in pure asm.js the TV crashed with a spectacular access violation. So they are probably not used to the level of hardcore coding Smart developers represent. But yeah, Quake III is about as advanced as it gets – and it pushes any browser to the outer limits of what is possible.

Once I get the SDK, docs and a few examples – I will begin adding support for it as quickly as possible. Which means you get a new project type specifically for the platform, plus API units (typically stored under the $RTL\Apis folder).

Why is this cool?

Because with support for Luna / WebOS, you as a Delphi or Smart developer can offer your services to companies that use WebOS or Luna (the open source version) in their devices. The list of companies is substantial – and they are well-established companies. And as you have no doubt noticed, hardware engineers don't always make the best software engineers. So this opens up plenty of opportunities for a good object pascal / Smart developer.

17948597_10154372538120906_1912877833_o

Luna has an Android-like quality to it – except it's smoother to use

Remember – other developers will use vanilla JavaScript. You have the onslaught of the VJL and the might of our libraries at your disposal. You can produce in days what others spend weeks achieving.

These are exciting days! I’ll keep you posted on our progress!

Smart Pascal, the next generation

April 15, 2017 Leave a comment

I want to take the time to talk a bit about the future, because like all production companies we are working towards lesser and greater goals. If you don't have a goal then you are in trouble; thankfully our goals have been very clear from the beginning. Although I must admit that our way there has been… "colorful" at times.

When we started back in 2010 we didn't really know what would become of our plans. We only knew that this was important; there was a sense of urgency, a "we have to build this" feeling in the air. I think everyone involved felt this was the case, without any rational explanation as to why. Like all products of passion, it can consume you – and you work day and night turning an idea into something real. From the intangible to the tangible.

transitions_callback

It seems like yesterday, but it was 5 years ago!

By the end of 2011 / early 2012, Eric and I had pretty much proven that this could be done. At the time there were more than enough naysayers, and I think both of us got flamed quite often for daring to think differently. People would scoff at me and say I was insane to even contemplate that object pascal could ever be implemented for something as insignificant and mediocre as JavaScript. This was my first meeting with a sub-culture of the Delphi and C++ community, a constellation I have gone head-to-head with on many occasions. But they have never managed to shake my resolve as much as an inch.

 

 

When we released version 1.0 in 2012 some ideas about what could be possible started to form. Jørn defined plans for a system we later dubbed "Smart net". In essence it would be something you logged onto from the IDE – allowing you to store your projects in the cloud, compile in the cloud (connected with Adobe build services) and essentially move parts of your eco-system to the cloud. Keep in mind this was when people still associated cloud with "storage"; they had not yet seen things like Uber or Netflix, or played Quake 3 at 160 frames per second courtesy of asm.js in their browser.

The second part would be a website where you could do the same, including a live editor, access to the compiler and also the ability to buy and sell components, solutions and products. But for that we needed a desktop environment (which is where the Quartex Media Desktop came in later).

cool

The first version of the Media Desktop, small but powerful. Here running in touch-screen mode with classical mobile device layout (full screen forms).

Well, we have hit many bumps along the road since then. But I must be honest and say that some of our detours have also been the most valuable. Had it not been for the often absurd (to the person looking in) research and demo escapades I did, the RTL wouldn't be half as powerful as it is today. It would deliver the basics, perhaps piggyback on Ext.js or some lame, run-of-the-mill framework – and that would be that. Boring, flat and limited.

What we really wanted to deliver was a platform. Not just a website, but a rich environment for creating, delivering and enjoying web and cloud based applications. And without much fanfare – that is ultimately what the Smart Desktop and its sexy node.js back-end are all about.

We have many project types in the pipeline, but the Smart Desktop type is by far the most interesting and powerful. And it's 100% under your control. You can create the desktop itself as a project – and also the applications that should run on that desktop as separate projects.

This is perfectly suited for NAS design (network attached storage devices can usually be accessed through a web portal on the device), embedded boards, intranets – and even the internet, for that matter.

You get to enjoy all the perks of a multi-user desktop, one capable of both remote desktop access, telnet access, sharing files and media, playing music and video (we even compiled the mp4 codec from C to JavaScript so you can play mp4 movies without the need for a server backend).

The Smart Desktop

The Smart Desktop project is not just for fun and games. We have big plans for it. And once it's solid and complete (we are closing in on 46% done), my next side project will not be more emulators or demos – it will be to move our compiler(s) to Amazon, and write the IDE itself from scratch in Smart Pascal.

smart desktop

The Smart Desktop – A full desktop in the true sense of the word

And yeah, we have plans for Emscripten as well – which takes C/C++ and compiles it into asm.js. It will take a herculean effort to merge our RTL with their sandboxed infrastructure – but the benefits are too great to ignore.

As a bonus you get to run native 68k applications (read: Amiga applications) via emulation. While I realize this will be mostly interesting for people that grew up with that machine – it is still a testament to the power of Smart and how much you can do if you really put your mind to it.

Naturally, the native IDE won't vanish. We have a few new directions we are investigating here – but native will absolutely not go anywhere. But the cloud desktop system we are creating is starting to become what we set out to make five years ago (has it been half a decade already? Tempus fugit!). As you all know, it was initially designed as an example of how you could write full-screen applications for the Raspberry PI and similar embedded devices. But now it's a full platform in its own right – with a Linux core and a node.js heart, there really is very little you cannot do here.

scsc

The Smart Pascal compiler is one of our tools that is ready for cloud-i-fication

Being able to log in to the Smart company servers, fire up the IDE and just code – no matter where you are, be it Spain, Italy, Egypt, China or the good old USA – is a pretty awesome thing!

Click compile and the server does the grunt work, letting you test your apps live in a virtual window; switch between device layouts and targets – then hit "publish" and it goes to Cordova (or Delphi) and voila, you get a message back when binaries for 9 mobile devices are ready for deployment. One click to publish your applications on the App Store, Google Play and the Microsoft marketplace.

Object pascal works

People may have brushed off object pascal (and from experience those people have a very narrow view of what object pascal is all about), but when they see what Smart delivers – which is itself written in Delphi, powered by Delphi and should be in every Delphi developer's toolbox – I think it should draw attention to Delphi as a product, object pascal as a language, and Smart as a solution.

With Smart it doesn't matter what computer you use. You can sit at home with the new A1222 PPC Amiga, or a kick-ass Intel i7 beast that chews virtual machines for breakfast. If your computer can handle a modern website, then you can learn object pascal and work directly in the cloud.

desktop_embedded

The Smart Desktop running on cheap embedded hardware. The results are fantastic and the financial savings of using Smart Pascal on the kiosk client is $400 per unit in this case

Heck, you can work off a $60 ODroid XU4; it has more than enough horsepower to drive the latest Chrome or Firefox engines. All the compilation takes place on the server anyway. And if you want a Delphi vessel rather than PhoneGap (so that it's a Delphi application that opens up a full-screen web-view and exposes features to your Smart code), then you will be happy to know that this is being investigated.

More targets

There are a lot of systems out there in the world, some of which did not exist just a couple of years ago. FriendOS is a cloud based operating system we really want to support, so we are eager to get cracking on their SDK when that comes out. Being able to target FriendOS from Smart is valuable, because some of the stuff you can do in SMS with just a bit of code would take weeks to hand-write in JavaScript. So you get a productive edge unlike anything else – which is good to have when a new market opens.

As far as Delphi is concerned there are smaller systems that Embarcadero may not be interested in, for example the many embedded systems that have come out lately. If Embarcadero tried to target them all – it would be a never-ending cat and mouse game. It seems like almost every month there is a new board on the market. So I fully understand why Embarcadero sticks to the most established vendors.

ov-4f-img

Smart technology allows you to cover all your bases regardless of device

But for you, the programmer, these smaller boards can represent thousands of dollars worth of savings. Perhaps you are building a kiosk system and need a good-looking user interface that is not carved in stone, touch capabilities, and low-latency full-duplex communication with a server; there is not much you can do about that if Delphi doesn't target the board. And Delphi is a workhorse, so it demands a lot more CPU than a low-budget ARM SoC can deliver. But web-tech can thrive in these low-end environments. So again we see that Smart can complement and be a valuable addition to Delphi. It helps you as a Delphi developer to act on opportunities that would otherwise pass you by.

So in the above scenario you can double down. You can use Smart for the user-interface on a low power, low-cost SoC (system on a chip) kiosk — and Delphi on the server.

It all depends on what you are interfacing with. If you have a full Delphi backend (which I presume you have) then writing the interface server in Delphi obviously makes more sense.

If you don't have any back-end, then depending on your needs or future plans it could be wise to investigate if node.js is right for you. If it's not – go with what you know. You can make use of Smart's capabilities either way to deliver cost-effective, good-looking device front-ends or mobile apps. It's a valuable tool in your Delphi toolbox.

Better infrastructure and rooting

So far our support for various systems has been in the form of APIs or “wrapper units”. This is good if you are a low-level coder like myself, but if you are coming directly from Delphi without any background in web technology – you wouldn’t even know where to start.

So starting with the next IDE update, each platform we support will not just have low-level wrapper units, but project types and units written and adapted by human beings. This means extra work for us – but that is the way it has to be.

As of writing the following projects can be created:

  • HTML5 mobile applications
  • HTML5 mobile console applications
  • Node.js console applications
  • node.js server applications
  • node.js service applications (requires PM2)
  • Web worker project (deprecated, web-workers can now be created anywhere)

We also have support for the following operating systems:

  • Chrome OS
  • Mozilla OS
  • Samsung Tizen OS

The following API’s have shipped with Smart since version 1.2:

  • Khronos browser extensions
  • Firefox-specific API
  • NodeWebkit
  • Phonegap
    • Phonegap provides native access to roughly 9 operating systems. It is however cumbersome to work with and beta-test if you are unfamiliar with the “tools of the trade” in the JavaScript world.
  • Whatwg
  • WAC Apis

Future goals

The first thing we need to do is to update and re-generate ALL header files (or pascal units that interface with the JavaScript libraries) and make what we already have polished, available, documented and ready for enterprise level use.

kiosk-systems

Why pay $400 to power your kiosk when $99 and Smart can do a better job?

Secondly, project types must be established where they make sense. Not all frameworks are suitable for full project isolation; some act more like utility libraries (like jQuery or similar training-wheels). And as much of the RTL as possible should be made platform independent and organized following our namespace scheme.

But there are also other operating systems we want to support:

  • Norwegian made Friend OS, which is a business oriented cloud desktop
  • Node.js OS is very exciting
  • LG WebOS, and their Enyo application framework
  • Asustor DLM web operating system is also a highly attractive system to support
  • OpenNAS has a very powerful JavaScript application framework
  • Seagate NAS OS 4 likewise uses JavaScript for visual, universal applications
  • Microsoft Universal Platform allows you to create truly portable, native speed JavaScript applications
  • QNap QTS web operating system [now at version 4.2]

All of these are separate from our own NAS and embedded device system: Smart Desktop, which uses node.js as a backend and will run on anything as long as node and a modern browser are present.

Final words

I hope you guys have enjoyed my little trip down memory lane, and also the plans we have for the future. Personally I am super excited about moving the IDE to the cloud and making Smart available 24/7 globally – so that everyone can use it to design, implement and build software for the future right now.

Smart Pascal Builder (or whatever nickname we give it) is probably the first of its kind in the world. There are a ton of “write code on the web” pages out there, but so far there is not a single hard-core development studio like I have in mind here.

So hold on, because the future is just around the corner 😉

Delphi developer on its own server

April 4, 2017 Leave a comment

While the Facebook group will naturally continue exactly like it has these past years, we have set up a forum for active Delphi developers on my Quartex server.

This has a huge benefit: first of all, those that want to test the Smart Desktop can do so from the same domain – and people who want to test Smart Mobile Studio can do so with me just a PM away. Error reports etc. will still need to be sent to the standard e-mail, but now I can take a more active role in supervising the process and help clear up whatever misunderstandings occur.

casebook

Always good with a hardcore Smart, Laz, amibian.js forum!

Besides that, we are building a lively community of Delphi, Lazarus, Smart and Oxygene/RemObjects developers! Need a job? Have work you need done? Post an ad – it won't cost you a penny.

So why not sign up? It's free, it won't bite you, and we can continue regardless of Facebook's uptime this year.

You enter here and just fill out user/pass and that's it: http://quartexhq.myasustor.com/sharetronix/

Smart Pascal: Information to alpha testers

April 3, 2017 Leave a comment

Note: This is re-posted here since we are experiencing networking problems at http://www.smartmobilestudio.com. The information should show up there shortly.

Our next-gen RTL will shortly be sent to people who have shown interest in testing our new RTL. We will finish the RTL in 3 stages of alpha before we hit beta (including code freeze, just fixes to existing code) and then release. After that we move on to the IDE to bring that up to date as well.

Important changes

You will notice that visual controls now have a ton of new methods, but one is particularly interesting: ObjectReady(). This method holds an important and central role in the new architecture.

You may remember that sometimes you had to use Handle.ReadyExecute() in the previous RTL? Often to synchronize an activity, setting properties or just calling ReSize() when the control was ready and available in the DOM.

To make a long story short, ObjectReady() is called when a control has been constructed in full and the element is ready to be used. This also includes the ready-state of child elements. So if you create 10 child controls, ObjectReady will only be called once those 10 children have finished constructing – and the handle(s) have safely been injected into the DOM.

To better understand why this is required, remember that JavaScript is asynchronous. So even though the constructor has finished – child objects can still be busy "building" in the background.

If you look in SmartCL.Components.pas, notice that when the constructor finishes – it makes an asynchronous call to a procedure named ReadySync(). This is a very important change from the previous version – because now synchronization can be trusted. I have also added a limit to how long ReadySync() can wait, so if it takes too long the ReadySync() method will simply exit.

ObjectReady

As you probably have guessed, when ReadySync() is done and all child elements have been successfully created – it calls the ObjectReady() method.

This makes things so much easier to work with. In many ways it resembles Delphi and Freepascal's AfterConstruction() method.
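As a minimal sketch of what this means in practice (the control and field names here are hypothetical, not part of the RTL), code that previously had to wait inside Handle.ReadyExecute() can now simply live in the ObjectReady() override:

procedure TMyPanel.ObjectReady;
begin
  inherited;
  // At this point the handle and all child handles have been injected
  // into the DOM, so it is safe to touch child controls for the first time
  FHeader.Caption := 'Ready';   // FHeader: a hypothetical child control
  Invalidate;                   // queues a synchronized Resize() / re-layout
end;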

To summarize the call-chain (or timeline) all visual controls follow as they are constructed:

  • Constructor Create()
  • InitializeObject()
  • ReadySync()
  • ObjectReady()
  • Invalidate()
  • Resize()

If you are pondering what on earth Invalidate() is doing in a framework based on HTML elements: it calls Resize() via the RequestAnimationFrame API. TW3GraphicControl descendants – visual controls that are actually drawn, much like native VCL or LCL components – naturally invalidate their graphics in this method. Ordinary controls just do a synchronized resize (and re-layout) of their content.

When implementing your own visual controls (inheriting from TW3CustomControl), the basic procedures you would have to override and implement are:

  • InitializeObject;
  • FinalizeObject;
  • ObjectReady;
  • Resize;

Naturally, if you don't create any child controls or data of any type – then you can omit InitializeObject and FinalizeObject; these act as constructor and destructor in our framework. So in the JVL (JavaScript Visual component Library) you don't override the constructor and destructor directly unless it is extremely important.

Where to do what

What the JVL does is ensure a fixed set of behavioral traits in a linear fashion – inside an environment where anything goes. In order to achieve that, the call chain (as explained above) must be predictable and rock solid.

Here is a quick cheat sheet over what to do and where:

  • You create child instances and set variables¬†in InitializeObject()
  • You set values and access the child instances for the first time in ObjectReady()
  • You release any child instances and data in FinalizeObject()
  • You enable/disable behavior in CreationFlags()
  • You position and place child controls in Resize()
  • Calling Invalidate() refreshes the layout or graphics, depending on what type of control you are working with.

What does a custom control look like ?

It's actually quite simple. I have included a ton of units here in order to describe what they contain; you can remove those you won't need. The reason we have fragmented the code like this (for example System.Reader, System.Stream.Reader and so on) is that node.js, Arduino, Raspberry PI, Phonegap and NodeWebKit are all platforms that run JavaScript in one form or another – and each has code that is not 1:1 compatible with the next.

Universal code, or code that executes the same on all platforms is isolated in the System namespace. All files prefixed with “System.” are universal and can be used everywhere, regardless of project type, target or platform.

When it comes to the reader / writer classes, it's not just streams. We also have binary buffers (yes, you actually have AllocMem() etc. in our RTL) – and special readers that work with database blobs, BSON attachments and so on; hence we had no option but to fragment the units. At least it makes for smaller code 🙂

unit MyOwnControlExample;

interface

uses
  // ## The System namespace is platform independent
  System.Widget,           // TW3Component
  System.Types,            // General types
  System.Types.Convert,    // Binary access to types
  System.Types.Graphics,   // Graphic types (TRect, TPoint etc)
  System.Colors,           // TColor constants + tools
  System.Time,             // TW3Dispatch + time methods

  // Binary data and streams
  System.Streams,
  System.Reader, System.Stream.Reader,
  System.Writer, System.Stream.Writer,

  // Binary data and allocmem, freemem, move, copy etc
  System.Memory,
  System.Memory.Allocation,
  System.Memory.Buffer,

  // ## The SmartCL namespace works only with the DOM
  SmartCL.System,         // Fundamental methods and classes
  SmartCL.Time,           // Adds requestAnimationFrame API to TW3Dispatch
  SmartCL.Graphics,       // Offscreen pixmap, canvas etc.
  SmartCL.Components,     // Classes for visual controls
  SmartCL.Effects,        // Adds about 50 fx prefixed CSS3 GPU effect methods
  SmartCL.Fonts,          // Font and typeface control
  SmartCL.Borders,        // Classes that control the border of a control
  SmartCL.CSS.Classes,    // Classes for self.css management
  SmartCL.CSS.StyleSheet, // Create stylesheets or add styles by code

  { Typical child controls
  SmartCL.Controls.Image,
  SmartCL.Controls.Label,
  SmartCL.Controls.Panel,
  SmartCL.Controls.Button,
  SmartCL.Controls.Scrollbar,
  SmartCL.Controls.ToggleSwitch,
  SmartCL.Controls.Toolbar }
  ;

type

  TMyVisualControl = class(TW3CustomControl)
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure ObjectReady; override;
    procedure Resize; override;
  end;

implementation

procedure TMyVisualControl.InitializeObject;
begin
  inherited;
  // create child instances here
end;

procedure TMyVisualControl.FinalizeObject;
begin
  // Release child instances here
  inherited;
end;

procedure TMyVisualControl.ObjectReady;
begin
  inherited;
  // interact with controls first time here
end;

procedure TMyVisualControl.Resize;
begin
  inherited;
  if not (csDestroying in ComponentState) then
  begin
    // Position child elements here
  end;
end;

CreateFlags? What is that?

Delphi’s VCL and Lazarus’s LCL have had support for CreateFlags for ages. It essentially allows you to set some very important properties when a control is created; properties that enable or disable how the control behaves.

  TW3CreationFlags = set of
    (
    cfIgnoreReadyState,     // Ignore waiting for readystate
    cfSupportAdjustment,    // Controls requires boxing adjustment (default!)
    cfReportChildAddition,  // Dont call ChildAdded() on element insertion
    cfReportChildRemoval,   // Dont call ChildRemoved() on element removal
    cfReportMovement,       // Report movements? Managed call to Moved()
    cfReportResize,         // Report resize? Manages call to Resize()
    cfAllowSelection,       // Allow for text selection
    cfKeyCapture            // flag to issue key and focus events
    );

As you can see these properties are quite fundamental, but there are times when you want to alter the default behavior radically to make something work. Child elements that you position and size directly don't need cfReportResize, for instance. This might not mean much for 100 or 200 child elements. But if it's 4000 rows in a DB grid, then dropping that event check has a huge impact.
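As a sketch of how such an override might look (the exact declaration of CreationFlags in the RTL may differ slightly from what I show here, so treat this as illustrative), a row control that is positioned and sized directly by its parent grid could opt out of the managed resize call:

function TMyGridRow.CreationFlags: TW3CreationFlags;
begin
  // Start with the default behavior for a custom control
  Result := inherited CreationFlags;
  // The parent grid positions and sizes each row directly,
  // so skip the managed Resize() notification per row
  Result := Result - [cfReportResize];
end;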

ComponentState

Yes, finally all visual (and non-visual) controls have ComponentState support. This makes your code much more elegant, and also way more compatible with Delphi and Lazarus. Here are the component states we support right now:

  TComponentState = set of (
    csCreating,   // Set by the RTL when a control is created
    csLoading,    // Set by the RTL when a control is loading resources
    csReady,      // Set by the RTL when the control is ready for use
    csSized,      // Set this when a resize call is required
    csMoved,      // Set this when a control has moved
    csDestroying  // Set by the RTL when a control is being destroyed
    );

So now you can do things like:

procedure TMyControl.StartAnimation;
begin
  // Can we do this yet?
  if (csReady in ComponentState) then
  begin
    // Ok do magical code here
  end else
  //If not, call back in 100ms - unless the control is being destroyed
  if not (csDestroying in ComponentState) then
    TW3Dispatch.Execute(StartAnimation, 100);
end;

Also notice TW3Dispatch. You will find this in System.Time and SmartCL.Time (it is a partial class and can be expanded in different units); we have isolated timers, repeats and delayed dispatching in one place.

SmartCL.Time adds RequestAnimationFrame() and CancelAnimationFrame() access, which is an absolute must for synchronized graphics and resize.
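A rough sketch of how that can be used (the exact parameters of TW3Dispatch.RequestAnimationFrame are my assumption here – check SmartCL.Time for the real signature): instead of redrawing immediately, you ask the browser to call you back right before the next frame is composed:

procedure TMyControl.QueueRepaint;
begin
  TW3Dispatch.RequestAnimationFrame( procedure
    begin
      // Only repaint if the control is ready and not being torn down
      if (csReady in ComponentState)
      and not (csDestroying in ComponentState) then
        Invalidate;
    end );
end;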

Other changes

In this post I have tried to give you an overview of immediate changes. Changes that will hit you the moment you fire up Smart with the new RTL. It is very important that you change your existing code to make use of the ObjectReady() method in particular. Try to give the child elements some air between creation and first use – you get a much more consistent result on all browsers.

The total list of changes is almost too big to post on a blog, but I did publish a part of it earlier. Even that list does not contain the full extent; I have tried to give you an understanding of the most important changes here.

Click here to look at the full change log

Smart Pascal: Busting browser storage limits

April 2, 2017 Leave a comment

SessionStorage is the name of a browser's in-memory-only storage, meaning it's essentially a RAM disk that is simply deleted when you navigate away from the website or close the browser.

Sessionstorage has also been deprecated, so you should avoid using it and go for Localstorage, or just use a raw, untyped uint8 array instead.

Or should you?

Ensuring 64 megabytes minimum

Browsers do not behave identically across devices. Try to get a consistent reading of something as simple as drawing sprites, and you will quickly notice that even the same device families (Android, iOS and Microsoft) can behave differently between versions – and even builds (revisions).

On embedded systems or thin clients with very little memory, allocating large chunks of uint8 arrays is not going to work. One of my test thin-client machines has only 512 megabytes of RAM – and it would throw an exception if I tried to allocate more than 20 megabytes of contiguous memory (again, as an array of uint8 bytes).

Using the dark side of the force

Screenshot

Offline means the system boots from a local cache disk

While testing Smart code on this little device, I noticed that quite large images loaded just fine. So where I was not allowed to allocate more than 20 megabytes, the browser would happily load in pictures taking up over 50 megabytes of pixel data.

It then struck me that the maximum size of a picture, which is enforced by the DIB API (at least on Windows desktop and embedded), is 4000 x 4000 pixels. Since each pixel is 32 bits (4 bytes, RGBA), that, my friend, is 64 megabytes right there!

I created a new class that inherits from the virtual filesystem that Smart Pascal uses, created an off-screen image object in the constructor – and then made a simple but effective "bytes to scan-line" calculation routine. So whenever the need for more data grew, it would first grow the picture so it could hold the data (and shrink it again on demand).
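As a rough sketch of the arithmetic involved (the names are mine, not those of the actual class): with a fixed scan-line width of 4000 pixels and 4 bytes per pixel, each row holds 16,000 bytes, so growing and addressing the buffer boils down to simple division:

const
  CNT_PIXEL_WIDTH   = 4000;                 // fixed scan-line width in pixels
  CNT_BYTES_PER_ROW = CNT_PIXEL_WIDTH * 4;  // RGBA = 4 bytes per pixel

// Number of scan-lines needed to hold "Bytes" bytes of data (rounded up)
function RowsNeeded(const Bytes: integer): integer;
begin
  Result := (Bytes + CNT_BYTES_PER_ROW - 1) div CNT_BYTES_PER_ROW;
end;

// Translate a linear byte offset into a scan-line (row) and pixel x-position
procedure OffsetToPixel(const Offset: integer; var Row, PixelX: integer);
begin
  Row    := Offset div CNT_BYTES_PER_ROW;
  PixelX := (Offset mod CNT_BYTES_PER_ROW) div 4;
end;

// The full 4000 x 4000 image is 4000 * 4000 * 4 = 64,000,000 bytes –
// the "64 megabytes" mentioned above.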

Humble but meaningful

Now 64 megabytes might not seem like much in our day and age, but if you are on holiday and want to connect to your home NAS – 64 megabytes of available ram makes a huge difference. Remember that localstorage only allows between 5 and 10 megabytes.

I should mention that using an image as a buffer makes little sense on a full Windows PC, a Mac or a Linux box. These systems will page memory to disk, and you will most likely never encounter the 20 megabyte barrier I experienced on this low-end Dell thin client. But considering that hotels, motels and B&Bs often have thin clients set up for their customers (read: you) – the Smart Desktop has to account for it.

 

 

 

 

 

Smart desktop: Amibian.js past, future and present

April 1, 2017 2 comments

Had someone told me 20 years ago that I would enter my 40’s enjoying JavaScript, my younger self would probably have beaten that someone over the head with a book on hardcore demo coding or something. So yeah, things have changed so much and we are right now in the middle of a paradigm shift that is taking us all to the next level – regardless if we agree or not.

Ask a person what they think about "cloud" and you rarely get an answer that resembles what cloud really is. Most will say "it's a fancy way of dealing with storage". Others will say it's all about web services – and others might say it's about JavaScript.

memyselfandi

Old coders never die, we just get better

They are all right and wrong at the same time. Cloud is first of all the total abstraction of all the parts that make up a networked computer system. Storage, processing capacity, memory, operating system, services, programming language and even instruction set.

Where a programmer today works with libraries and classes to create programs that run on a desktop — dragging visual controls like edit-boxes and buttons into place using a form or window designer; a cloud developer builds whole infrastructures. He drags and drops whole servers into place, connects external and internal services.

Storage? OK, I need Dropbox, Amazon, Google Drive, Microsoft OneDrive, local disk – and he just drags that onto a module. Done. All of these services now have a common API that allows them to talk to each other. They become like DLL files or classes, all built using the same mold – but with wildly different internals. It doesn't matter, as long as they deliver the functionality according to the standard.

Processing power? Set up an Azure or Amazon account and, if you have the cash, you can get enough compute to pre-calculate the human brain if you like. It has all been turned into services – ones you can link together like Lego pieces.

Forget everything you think you know about the web; that is but the visual rendering engine over the proverbial death-star of technology hidden beneath. People have only seen the tip of the iceberg.

Operating systems have been reduced to a preference. There is no longer any reason to pick Windows over Linux on the server. Microsoft knew years ago that this day would come. Back in the late 90s I remember reading an interview with Steve Ballmer doing one of his black-ops meetings with Norwegian tech holders in Oslo; he outlined software as a service when people were on 14.4 modems. He also mentioned that "we need a language that is universal" to make this a reality. You can guess why .net was stolen from Borland, not to mention their failed hostile takeover of Java (or J#) which Anders Hejlsberg was hired to spearhead.

Amibian.js

Amibian.js is my, Gunnar's and Thomas's effort to ensure that the Amiga is made portable and can be enjoyed in this new paradigm. It is not made to compete with anyone (as was suggested earlier), but rather to make sure the Amiga gets some life into her again – and that people of this generation, and the kids growing up now, get to enjoy the same exciting environment that we had.

Amibian666

From Scandinavia with love

The world is going JavaScript. Hardware now speaks JavaScript (!), your TV now speaks JavaScript – heck, your digital watch probably runs JavaScript. And just to add insult to injury – asm.js now runs JS code side-by-side with C/C++ in terms of speed. I think the browser has seen more man-years of development time than any other piece of software out there – with the exception of GCC / GNU Linux perhaps.

Amibian is also an example of what you can do with Smart Pascal, a programming environment that compiles object pascal to JavaScript – one we (The Smart Company AS) made especially for this new paradigm. I knew this was coming years ago – and have been waiting for it. But that's another story altogether.

Future

Well, naturally the desktop system is written from scratch so it needs to be completed. We are at roughly 40% right now. But there is also work to be done on UAE.js (a mild fork of sae, scriptable Amiga emulator) in terms of speed and IO. We want the emulated Amiga side to interact with the modern web desktop – to be able to load and save files to whatever backend you are using.

 

Amibianstuff

For those about to rock; We salute you!

Well, it’s not that easy. UAE is like an organism, and introducing new organs into an existing creature is not easily done. UAE.js (or SAE) has omitted a huge chunk of the original code – which also includes the modified boot-code that adds support for external or “virtual” UAE drives (thanks to Frode Solheim of Fs-UAE for explaining the details of the parts here).

But there are hacker ways. The dark side is a pathway to many abilities, some deemed unnatural. So if all else fails, I will kidnap the hardfile API and delegate the IO signals to the virtual filesystem on the server – in short, UAE.js will think it's booting from a hardfile in memory, when in reality it will get its data from node.js.

There are some challenges here. UAE (the original) is not async but ordinary, linear C code. JavaScript is async and may not have the data ready when the method returns. So I suspect I will have to cache quite a lot – perhaps as much as 1 megabyte backwards and forwards from the file position. But getting data in there (and out) we will, come hell or high water.
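A minimal sketch of the kind of read-cache window I have in mind (purely illustrative – this is not actual UAE.js or SAE code): keep a block of roughly a megabyte on each side of the current file position in memory, and only go back to the node.js filesystem when a read falls outside that window.

const
  CNT_CACHE_MARGIN = 1024 * 1024;   // keep ~1 MB before and after the position

type
  TCacheWindow = record
    Offset: integer;   // file offset where the cached block starts
    Size:   integer;   // number of bytes currently held in the cache
    // the actual cached bytes would live in a buffer object here
  end;

// True if the requested range [Position .. Position + Count) is already cached
function InWindow(const Window: TCacheWindow; Position, Count: integer): boolean;
begin
  Result := (Position >= Window.Offset)
        and (Position + Count <= Window.Offset + Window.Size);
end;

// When InWindow() returns false, the server side is asked for a fresh block
// centred on Position, CNT_CACHE_MARGIN bytes in each direction.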

We can also drop a lot of sub-code, like parts of the Gayle interface. I found out this is the chip that deals with the IDE interface – so it has to be there, but it can also host memory expansions; who the hell cares about that in JavaScript in 2017? There is more than enough fun via standard chip/fast/RTG memory – so the odd bits can be removed.

So we got our work cut out for us. But hey.. there can only be one .. QUARTEX! And we bring you fire.

OK. Let's do this!

Node and delphi in sweet harmony

March 31, 2017 Leave a comment

There are cases when you want to dip your toe into Delphi land. While I can't think of anything in particular at the moment – it would be handy if, say, you have some Delphi DLLs that you would like to re-use, so you don't have to re-write the whole thing in Smart Pascal.

Enter FFI

There is this mind-blowing package over at NPM and if you don't know it yet, you are going to love it. It's called FFI (foreign function interface) and essentially allows you to call functions from ordinary libraries (DLL, .so and .dylib files). It is supported on Windows, OS X and Linux. So yeah, now you can call Delphi from your Smart code!

So let's say you have this awesome data backend written in Delphi and you don't want to re-write 10 years of code. What you can do is simply create a DLL, expose a simplified API for the core functionality – and then call those functions from your Smart Pascal node.js server – which is just mind-bendingly awesome (excuse my language)!
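As a sketch of the Delphi side (the library and function names here are made up for illustration), the DLL just needs to export plain cdecl or stdcall functions using simple C-compatible types; on the node side, the ffi package's Library() call can then bind to them by name:

library backendapi;

// Keep the exported surface small and C-friendly: integers, doubles and
// PAnsiChar strings travel safely across the FFI boundary.
function GetCustomerCount: Integer; cdecl;
begin
  // Call into the existing Delphi business logic here
  Result := 42;
end;

exports
  GetCustomerCount;

begin
end.

On the JavaScript side, the same function would be declared to ffi as taking no arguments and returning an 'int'; anything more complex (records, object graphs) is usually better flattened to JSON strings before crossing the boundary.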

You can also call functions exposed in the same process. So let's say you are calling node.js from your Delphi application (or C++, or Freepascal, or C# – whatever floats your boat); then, you guessed it – you can now invoke methods from the "runner".

There goes another weekend

I was going to relax this weekend, but I'm busy compiling node.js from scratch (and you know how I feel about C++ on Linux, it's like the worst date ever!) into something Visual Studio can eat – and then exporting it from that into Delphi. For some reason C++ Builder goes bonkers when I try to build it there. Which is odd, because it should be vanilla C/C++ (probably some GNU curiosity messing it up).

 

kick_ass

Create awesome full browser-desktop apps in Smart!

 

But once it's .DCU-ified, how does a TNodeJS component sound? We already have TV8Engine – so yeah, interfacing with the mothership (read: Delphi) is important to us.

Anyway – if your brain didn't just explode from the FFI info, then go and get all giddy inside right now – because this utterly bridges the gap between native code and scalable, kick-ass clustered JavaScript on the server side!

https://www.npmjs.com/package/ffi

If you still wonder if this is powerful enough, check out the Smart-based operating system that is currently kicking serious butt! It even runs 68k assembly programs (real Amiga applications) in a window! This is the core demo for the SMS update (it should have been out 2 weeks ago, I know – but it's not me in charge of that bit!).

amibianrocks

An OS in Smart? Sure, with a Linux bootstrap of course – but man does it run!

Oh, and by the way, Smart Pascal now gives you access to more than 350,000 node.js packages. That is the biggest, most bad-ass code repository in existence. And you can add TypeScript on top of that as well.

Ok let’s do this!

HTMLComponents, the native Delphi html rendering engine

March 26, 2017 1 comment

This review has been postponed two times, so my apologies to Alexander Sviridenkov, the author of this magnificent Delphi component package.

HTML rendering engine in Delphi

Delphi has always had its fair share of brilliant programmers. As a result Delphi has enjoyed a vibrant component market, one that simplifies the life of both novice and expert.

Sadly, HTML has never really been Delphi's strong suit. Sure, we have various server types and packages, from IntraWeb to UniGUI – each of them with positive and negative traits. We have IIS plugin packages and Apache modules – and right now Embarcadero is branching into Linux as well. Sadly, all of these solutions suffer the same problem: they are purely native. They lack the automatic scaling mechanisms of node.js and C# – and don't even get me started on clustering (Netflix is written 100% in node.js; why did they pick JavaScript instead of Delphi, C# or C++? Food for thought!).

But serving HTML is all well and good, and while I could write a doctorate on why node.js and Smart Pascal are a much better technology for Delphi developers when it comes to web development – it would come across as disingenuous since Smart originated with me. So people will simply have to find out for themselves.

Rendering with Delphi

Server and client aside: if there is one thing Delphi is the undisputed king of, it is desktop applications and productivity! Fueled by object pascal's modular, component-based frameworks like the VCL and FireMonkey – no other language even comes close to what Delphi has to offer. Except when it comes to HTML, that is…

Traditionally there have been only two paths you could take if you wanted to display or edit HTML: you could stick with good old TWebBrowser, a thin wrapper over Microsoft's Internet Explorer APIs. If all you want to do is show some basic HTML and you lack even a single fiber of taste in your body, then TWebBrowser could be the ticket.

But anyone who has ever tried to work with that component, perhaps dived into the hundreds of interfaces you need to deal with in order to do anything remotely fun – knows that it leaves you frustrated and wanting. It's also not portable; it is as pure a Windows component as can be. It makes absolutely no sense to use this component in 2017, especially when FireMonkey is on the menu.

The other option was TFrameBrowser or THtmlViewer, a fast, responsive browser control written purely in Delphi (by Dave Baldwin, I seem to remember?). This was a commercial component back in the day, but he open sourced it as he abandoned Delphi for C# or C++ or something along those lines. You can pick this up on GitHub for free, and there is also a Lazarus version of it.

The problem with both of these solutions is the same one haunting us all, namely that they are getting old. They haven't seen much development for 10-15 years (give or take the port to Lazarus), and while they may work – they lack support for all the interesting tags, CSS styles and element types that we now take for granted. THtmlViewer is stuck somewhere between HTML 1 and 2, and TWebBrowser is… well, that has a life of its own, so I honestly can't tell you what it represents these days. But a little bird told me Embarcadero haven't updated the ActiveX interfaces at all – so it's not even using the new Microsoft Edge engine. Personally I suspect Microsoft is the culprit, exposing Edge as a .net assembly exclusively.

The Chromium wrappers

A third alternative appeared around 2011 / 2012 in the form of chromium embedded. This is the browser we use in Smart Mobile Studio to do background rendering and run the compiled Smart Mobile Studio applications when you hit F9.

WebKit is by far the fastest and most evolved engine out there, but it also adds around 90–120 megabytes of dependencies and data files to your distribution. It also has a lot of quirks, and if you do hardcore coding like I do – pushing the GPU to the bleeding edge of what WebKit can deliver – then prepare for crashes, unexplainable access violations and having to restart the whole application to get rid of some background process Chromium has spawned outside your reach. It's a great solution, but it comes with its own baggage and history.

HTMLComponents

Suddenly, out of the blue, a fourth option has emerged. This time it's a full re-implementation of HTML 4 (with elements from HTML 5) written in pure, platform independent Delphi. This means it has no dependencies, it will render as much as 90% of modern HTML, and it supports all the fancy tags and CSS styles (including transitions, which I must admit I did not expect).

Not only that, it works under the traditional VCL framework and Firemonkey, so you can use the components on both desktop and mobile applications. If that doesn’t get your creative juices flowing I don’t know what will; this is frikkin awesome!

Alexander Sviridenkov has really gone to town with this package. And I must extend my gratitude for allowing me access to his codebase. We are considering using this as the primary rendering engine for the new Smart Mobile Studio IDE's live editor. So I want to personally thank Alexander for the opportunity to investigate the components at source level.

Writing a review of this great package seems only natural. It has been years since I've been this stoked about a Delphi component package. If you like the idea of using HTML as a part of your Delphi application's presentation layer – then consider this your birthday, Xmas and St. Patrick's Day all rolled into one. HtmlComponents has got you covered!

So what is it?

component_list

A rich and powerful component suite

At the heart of the package is a modern, fast, portable HTML rendering engine written 100% in Delphi. Around this engine Alexander has made a large collection of visual controls that you use to enrich your applications. Most notably, a WYSIWYG editor akin to what you find in Adobe Dreamweaver. This is not a text editor where you write tags (well, you can), but rather a full, rich word processor control where you can design your web pages.

The components most interesting to me were obviously: THtmlEditor, TDBHtmlEditor, THtTemplatePanel and THtMetroPanel, THtCategoryButtons.

These are just 5 out of the 23 visual controls that ship with this package, each of them using the rendering engine to present better-looking information. I mean, when you can use HTML to describe how each row in a listbox should appear – it goes without saying that this is a lot easier (and prettier) than spending hours with TCanvas (the now ancient TCanvas class is not exactly renowned for its blistering speed or fantastic visuals). So HtmlComponents gives you an impressive list of visual extravaganza to play with, all of which make it a snap to create modern, good-looking user interfaces.

And you don't have to use the pre-defined components. HtmlComponents will happily take your HTML and CSS and draw it straight to a TCanvas (or GDI+) without complaints. So you are not stuck having to use this or that base class (which is often the case).

What really impressed me is the speed. Writing an html file parser that is fast is one thing, but rendering complex and composite web content? That’s not for the faint of heart, it requires some serious skill.

The first and easiest test you can do to see how quickly a rendering engine adapts and recalculates its node tree (internal stuff) is to resize the window rapidly back and forth. Most homebrew browsers are going to struggle with that, typically because they just calculate as they draw each element, which is not how to go about this particular task. I'm super impressed by Alexander's work here, because his controls are just as responsive as commercial browsers.

The editor

Since I work with web technology, the editor is ultimately what would save us time and also help lift the quality of our IDE. Smart Mobile Studio is extremely powerful, but our weak spot is without a doubt that our IDE doesn't properly reflect the power of the RTL. There are also things that would be simpler for our customers – editing documents directly is one of them – so the editor was really make or break for my part.

editor

The example editor program is impressive to say the least

The editor is wonderful. There are a few tidbits that I would like to see improved, but those are very specific to my needs. But I have to be honest and say that this is the most impressive WYSIWYG html editor I have seen outside of Dreamweaver. You have full control of everything, from tables to borders, margins to padding – and the active segment you are editing is easy to spot due to dotted highlighting.

Alexander has also made some very interesting dialogs (see the color and border dialogs in the picture above). While I am missing a more dedicated CSS stylesheet solution (like a gradient editor, image-to-CSS conversion and other handy features) – the package in general has more or less everything you need to get going. And then some (it may also be that the stylesheet features are there; the codebase is so huge it would take a week to study every part).

In short: if you are looking for a drop-in solution for editing web documents that is platform independent, has zero dependencies, is lightweight and fast, and even works on mobile devices – then this is it! The same goes for rich and gorgeously rendered custom controls.

I do realize that there are other editors out there; packages like RichView are very popular, and while that is both fast and polished, it's not an HTML rendering engine. Which is ultimately where HtmlComponents scores all the points. The editor is not the core of the product; it's just a control that exposes the product – which is a modern HTML rendering system written in pure Delphi.

It even outputs to PDF, again with no dependencies!

What about the web?

Being able to edit html documents is cool and if you are looking for a package to give you that functionality – then this is it. No doubt there. But how compatible is it? I mean, html is evolving at an alarming rate – both in terms of features but also with regards to JavaScript. How does HtmlComponents fare in that department?

I guess the obvious observation is that HtmlComponents is not a browser. It has a very capable rendering engine and editor, and supports the most common tags – both old and new – but at the end of the day it's not designed to be a browser.

If you drop an editor on a form, set the readonly property to true, and then call LoadFromUrl() on a complex website, then odds are it will come up short. It will happily render static websites, but the moment frames and JavaScript come into the picture – you are quickly reminded that this is ultimately an editor and rendering engine designed to leverage web technology for Delphi – and it's not trying to be anything else. It delivers what it says, and it does it well. What more can you ask?

browser

It has no problems with traditional websites that don't rely on frames or JS

Lack of scripting

I have no doubt that Alexander could turn this package into a real life browser if he so wanted. But again this package delivers compliant html editing and presentation inside Delphi applications. That is what he set out to create, and that is exactly what I need.

Alexander has also made allowances for future development. The package ships with support for pascal scripting (the Jedi scripting engine), and he has thoughtfully isolated scripting in general in separate adapter classes. This means that you are free to add whatever scripting language you want. Don't like pascal script? No problem, just implement an adapter and use whatever you like.

My immediate thought goes to Besen, the open source, complete ECMAScript fifth edition, JIT-powered JavaScript engine written 100% in object pascal. Hopefully Alexander will get inspired and add that as an alternative in a future edition. With Besen as the primary scripting engine – and taking the time to expose all the traditional DOM objects JavaScript expects to find in a browser environment (like window, document, createElement, getElementById and so on) – HtmlComponents would be capable of great things. All of it in portable, platform independent Delphi.

The V8 JavaScript engine used by Chrome and other Chromium-based browsers is also an alternative. It is a fairly humble DLL that is easy to use from Delphi, but it is presently only available on Windows – unless the Delphi maintainer decides to add Linux and OS X binaries to the repository as well. But indeed, V8 could also be added quite easily.

Final thoughts

This is a wonderful component package for Delphi, one I really hope we get to use in our own product when that time comes. There will always be bits and pieces that could be improved, but it’s quite hard to pinpoint what exactly to criticize in this package.

helpmanual

H&M is pretty solid

I did notice an access violation in the Paint() method on two occasions, which could easily be fixed by checking if csDestroying has been set – and that the device context is not null (it was in the middle of a paint when I killed the program). But that is nitpicking on my part. All in all I am super impressed and really recommend this product – it's worth every penny!

The product is also used by a list of well-established companies. An application like Help & Manual isn't exactly small potatoes, nor is the CoffeeCup HTML editor some unknown application. These are used by thousands of people every day.

Have a look at Alexander's gallery and I'm sure you will agree that this package has merit.

As far as I’m concerned, this will form the basis of our future template and content editor.

Jon-Lennart Aasenden
 The Smart Company AS

Amibian + Smart pascal = A new beginning

March 25, 2017 Leave a comment

In the Amiga community there is a sub-group of people with an agenda. They hoard and collect every piece of hardware they can get their hands on – then sell it at absurd prices on eBay to enrich themselves. This is ruining the community at large, ensuring that ordinary users cannot get their hands on a plain, vanilla Amiga without having to fork out enough dough to buy a good car.

“We also have a working Facebook clone – but we’re not
going into competition with Mark Zuckerberg either”

Thankfully not everyone is like that. There are some very respected, very good people who buy old machines, restore them – and sell them at affordable prices. People who do this as a hobby or to make a little on the side. Nothing wrong with that. No, it's the people who try to sell you an A2000 for $3000, or who pimp out Vampire accelerator cards at €700 a piece, that are the problem.

As for the price sharks, well – this has to stop. And Gunnar's Amibian distro has already given the Amiga scalpers a serious uppercut. Why buy an old Amiga when you can get a high-end A4000 experience with 4 times as much power for around $35? This is what Gunnar has made a reality – and he deserves a medal for his work!

And myself, Thomas and the others in our little band of brothers will pick up the fight and stand by Gunnar in his battle. A battle to make the Amiga affordable for ordinary human beings that love the platform.

Amiga as a service

Yesterday I did a little experiment. You know how Citrix and VMware services work? In short, they are virtualization application servers. That means they can create as many instances of Windows or Linux as they want – run your applications on them – and you can connect to your instance and use it. This is a big part of how cloud computing works.

While my little experiment is very humble, I am now streaming a WinUAE instance's display from my basement server PC – just some old bucket of chips I use for debugging – directly into Amibian.js (!). It worked! I just created the world's first Amiga application server. And it took me less than 30 minutes in Delphi!

Amibian, Amibian.js, appserver, what gives?

Let's clear this up so you don't mix them up:

Amibian. This is the original Linux distro made by Gunnar Kristjansson. It boots straight into UAE in full-screen. All it needs is a Raspberry PI, the Amiga rom files and your workbench or hard disk image. This is a purely native (machine code) solution.

Amibian.js – this is a JavaScript remake of Amiga OS that I'm building, with the look and feel of OS 4. It uses UAE.js (also called SAE) to run 68k software directly in the browser. It is not a commercial product, but one of many demos that ship with Smart Mobile Studio. "Smart" is a compiler, editor and run-time library sold by The Smart Company AS. So Amibian.js is just a demo, just like AMOS shipped with a ton of demos and Blitz Basic came with a whole disk full of examples.

Amiga application server (what I mentioned above) was just a 30 minute experiment I did yesterday after work. Again, just for fun.

I hope that clears up any confusion. Amibian.js is a purely JavaScript based desktop made in the spirit of the Amiga – it must not be confused with Amibian the Linux distro, which boots into UAE on a Raspberry PI. Nor is it an appserver – but rather, it can connect to an appserver if I want to.

genamiga

Generation Amiga post on Amibian.js earlier when I added some look & feel

With a bit of work, and if everything works as expected (I don't see why not), I will upload both source and binaries to GitHub together with Amibian.js.

There is only one clause: it cannot be used, recreated, included or distributed by Cloanto. Sorry guys, but the ROMs belong to the people, and until you release those into the public domain, you won't get access to anything we make. Nothing personal, but pimping out ROMs, and even having the audacity to fork UAE and sell it as your own? You should be ashamed of yourselves.

Are you in competition with FriendOS?

This question has popped up a couple of times over the past two weeks. So I want to address it head on.

I make a product called Smart Mobile Studio. I do that with a group of well-known developers, and we have done so for many years now. The preliminary ideas were presented on my blog during the winter of 2009 and early 2010, and we started working (and blogging) after that. Smart Mobile Studio and its language, Smart Pascal (see the Wikipedia article), take object pascal (like freepascal or Delphi) and compile it to JavaScript rather than machine code.
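
If you have never seen the dialect, the source side is just ordinary object pascal. Here is a trivial, made-up example (written as a plain Delphi/FPC-style console program, not something from the Smart RTL) – with Smart, the compiler simply emits JavaScript for code like this instead of machine code:

program greeter;

{$mode delphi}

type
  // Ordinary object pascal class - nothing browser-specific about it.
  TGreeter = class
  private
    FName: string;
  public
    constructor Create(const AName: string);
    function Greet: string;
  end;

constructor TGreeter.Create(const AName: string);
begin
  FName := AName;
end;

function TGreeter.Greet: string;
begin
  Result := 'Hello, ' + FName + '!';
end;

var
  G: TGreeter;
begin
  G := TGreeter.Create('Amiga');
  WriteLn(G.Greet);
  G.Free;
end.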

smsamiga

The Smart compiler is due for OS4 once I get the A1222 in my hands

One of the examples that has shipped with Smart Mobile Studio, and has also been available through a library called QTX, is something called the Quartex Media Desktop. It is an example of a NAS server front-end, a kiosk front-end (ticket ordering, cash machines etc.) or just an intranet desktop where you centralize media and files. It is also node.js powered to deal with the back-end filesystem. This is now called Amibian.js.

In other words – it has nothing to do with Friend Software Labs at all. In fact, I didn't even know Friend existed until they approached me a few weeks ago.

15590846_10154070198815906_4207939673564686511_o

The Quartex Media desktop has been around for ages

Amibian.js is just an update of the Quartex Media Desktop example. It is not a commercial venture at all, but an example of how productive you can be with Smart Pascal.

And it's just one example out of more than a hundred that showcase different aspects of our run-time library. This example has been available since version 1.2 or 1.3 of Smart, so no, this is not me trying to reverse engineer FriendOS – I was doing this long before FriendOS was even presented. I have just added a windowing manager and made it look like OS4, which also happened before I had any contact with my buddies over at Friend Software Labs (why do you think they were interested in me?).

16805402_1914020898827741_149245853_o

Early 2017 Linux bootloader by Gunnar

So, am I in competition with Friend? NO! I have absolutely no ambition, aspiration or intent for anything of the sort. And should you be in doubt, let me break it down for you:

  • Hogne, Arne, David, Thomas, Francois and everyone at Friend Software Labs are friends of mine. I talk almost daily with David Pleasence, who is a wonderful person and an inspiration to everyone who knows him.
  • Normal people don't sneak around stabbing friends in the back. Plain and simple. That is not how I was raised, and such behavior is completely unacceptable.
  • Amibian.js is 110% pure Amiga oriented. The core of it has been a part of Smart for years now, and it has been freely available to anyone on Google Code and GitHub.
  • For every change we have made to the Smart RTL, the media desktop example has been updated to reflect it. But ultimately it's just one out of countless examples. We also have a working Facebook clone – but we're not going into competition with Mark Zuckerberg either.
  • People can invent the same things at the same time. That's how reality works. There is a natural evolution of ideas, and great minds often think alike.

Why did you call it Amibian.js, it’s so confusing?

Well, it's a long story, but to make it short: the first “boot into UAE” thing was initially outlined by me (with the help of chips, the UAE4Arm maintainer). But I didn't do it right because Linux has never really been my thing. So I just posted it on my retro-gaming blog and forgot all about it.

Gunnar picked this up and perfected it. He has worked weeks and months making Amibian into what it is today – together with Thomas, our Spanish superhero /slash/ part-time dictator /slash/ minister of propaganda 🙂

We then started talking about making a new system. Not a new UAE, but something new and ground-breaking. I proposed Smart Pascal, and we wondered how the Raspberry PI would handle JavaScript performance-wise. I then spent a couple of hours adding the icon layout grid and the windowing manager to our existing media desktop – and then fired up some HTML5 demos. Gunnar tested them under Chrome on the Raspberry PI — and voila, Amibian.js was born.

amidesk

And that is all there is to it. No drama, no hidden agendas – and no conspiracy.

I should also add that I do not work at Friend Software Labs, but we have excellent communication and I’m sure we will combine our forces on more than one software title in the future.

On a personal note, I have more than a few titles I would like to port to FriendOS. One of my best sellers of all time is an invoice and credit application – which will be re-written in Smart Pascal (it's presently a mix of Delphi and C++ Builder code). The same program is also due for Amiga OS 4.1 whenever I get my A1222 (looking at you, Trevor *smile*).

Well, I hope that clears up any misunderstanding regarding these very separate but superficially related topics. Amibian.js will remain 100% Amiga focused – that has been and remains our goal.

Ode to our childhood

Amibian is, and will always be, an ode to the people who gave us such a great childhood. People like David Pleasence, who was the face of Commodore in Europe. A man who embodies the friendliness of the Amiga with his very being. Probably one of the warmest and kindest people I can think of.

Francois Lionet, author of Amos Basic. The man who made me a programmer, and whom I cannot thank enough. And I know I'm not alone in having learned from him.

Mark Sibly, the author of BlitzBasic, the man who taught me all those assembler tricks. A man who deserves to go down in the history books as one of the best programmers ever.

And above all – the people who made the Amiga itself; giants like Jay Miner, Dave Haynie, Carl Sassenrath, Dave Needle and RJ Mical (forgive me for not listing all of you – your contributions will never be forgotten).

That is what Amibian.js is all about.

Patents and greed may have killed the actual code. But we are free to implement whatever we like from scratch. And when I'm done – your patents will be worthless.

 

Amiga revival, Smart Pascal and growing up

March 11, 2017 1 comment

Maybe it's just me, but the Amiga is kind of having a revival these days. It seems to me like the number of people going back to the Amiga has just exploded over the past couple of years. Much of that is no doubt thanks to my buddy Gunnar Kristjansson's excellent work on the Amibian distro for the Raspberry PI, making a high-end Amiga experience that would have cost you thousands of dollars available at around $35.

amifuture

Looking forward to some cosy reading

While Gunnar's great distro is no doubt a huge factor in this, I believe it's more than just easy access. I think a lot of us who grew up with the system, who lived the Amiga daily from elementary school all the way to college, have come full circle. We spend our days coding on PCs and Macs or making mobile software – but deep down inside, I think all of us are still in love with that magical machine: the Commodore Amiga.

I am honestly at a loss for words on this (and that's a first, most days you can't get me to shut the hell up). Why should a 30-year-old system attract me more than the latest stuff, and still cause so much joy in my life? I mean, I have a fat-ass i7 that growls when you start it, with 64 gigabytes of RAM, an SSD and all the extras; I have Macs all over the house, the latest consoles – and enough embedded boards to start my own arcade if I so desired.

Yet at the end of the day, when the kids are in bed and the girlfriend is firmly planted in front of her favorite TV show, fathers are down in basements all around Europe. Not watching porn, not practising black magic or trying to transmute lead into gold, nope: coding in assembler on an mc68k processor running at a whopping 7 MHz and loving every minute of it!

Today the madness knew no bounds and forced me, out of sheer perverted joy, to order four copies of Amiga Future magazine (yes, there are still magazines for the Amiga, believe it or not), a few posters, a mousemat and (drumroll) the ever sexy A1222. Actually that was a lie – I ordered that weeks ago. Trevor Dickinson over at A-EON hooked me up, so I'm getting it as soon as it comes off the assembly line. And for those that don't know, the A1222 is the new affordable Amiga that is released today. It's not a remake of the older models, but a brand new thing. I haven't been this giddy about a piece of silicon since I fell into a double-D cup at a beach in Spain last year.

Smart Pascal

It made sense to unite my two great computing passions, namely the object pascal language and the Amiga, into one package. So whenever I have some spare time I work my ass off on the update for Smart Mobile Studio. And it's getting probably the biggest “demo” ever shipped with a programming language.

What? Well, a remake of the Amiga operating system. But not just a simple CSS-styled shallow lookalike. You know me, I just had to go all the way. So I married the system with something called uae.js, which is essentially the JavaScript version of the Amiga emulator. It's compiled with Emscripten – a toolchain that takes LLVM bitcode compiled from C/C++ and spits out optimized asm.js code.

amidesk

You just can't kill it, Amiga is 4ever

So, Smart Pascal in the left hand – C/C++ in the right. It's like being back in college all over again. The only thing missing now is for Watcom to suddenly return and Borland to rise from the grave with another Turbo product. But yes, JavaScript is something I really enjoy. And being able to compile object pascal to JavaScript is even better.

The end result? Well, since I don't have too much time on my hands it's roughly 31-32% done, and UAE.js will be activated when we hit 50%. So right now it's a sexy cloud front-end. It has a virtual filesystem that runs fine over localStorage, but it can also talk to node.js and access the real filesystem on your server.
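
Roughly how that works (sketch only, with invented names – this is not the Smart RTL): the desktop codes against one small filesystem API, and the back-end behind it can be swapped out. In the browser that back-end is essentially a key/value store – localStorage maps a path to its content – while under node.js the same calls are forwarded to the real disk. A compilable FreePascal illustration of the abstraction:

program vfssketch;

{ Sketch of the virtual-filesystem idea: one abstract API, interchangeable
  back-ends. A localStorage-backed driver behaves like the key/value store
  below (path as key, file body as value); a node.js driver would forward
  the same calls to the real disk. All names are invented for illustration. }

{$mode delphi}

uses
  Classes, SysUtils, Generics.Collections;

type
  // The contract the desktop codes against, regardless of where files live.
  IVirtualFileSystem = interface
    function ReadFile(const APath: string): string;
    procedure WriteFile(const APath, AData: string);
    function FileExists(const APath: string): Boolean;
  end;

  // Key/value backed storage, i.e. what a browser driver does on top of localStorage.
  TKeyValueFileSystem = class(TInterfacedObject, IVirtualFileSystem)
  private
    FStore: TDictionary<string, string>;
  public
    constructor Create;
    destructor Destroy; override;
    function ReadFile(const APath: string): string;
    procedure WriteFile(const APath, AData: string);
    function FileExists(const APath: string): Boolean;
  end;

constructor TKeyValueFileSystem.Create;
begin
  inherited Create;
  FStore := TDictionary<string, string>.Create;
end;

destructor TKeyValueFileSystem.Destroy;
begin
  FStore.Free;
  inherited Destroy;
end;

function TKeyValueFileSystem.ReadFile(const APath: string): string;
begin
  if not FStore.TryGetValue(APath, Result) then
    raise Exception.CreateFmt('File not found: %s', [APath]);
end;

procedure TKeyValueFileSystem.WriteFile(const APath, AData: string);
begin
  FStore.AddOrSetValue(APath, AData);
end;

function TKeyValueFileSystem.FileExists(const APath: string): Boolean;
begin
  Result := FStore.ContainsKey(APath);
end;

var
  FS: IVirtualFileSystem;
begin
  FS := TKeyValueFileSystem.Create;
  FS.WriteFile('dh0:s/startup-sequence', 'LoadWB');
  WriteLn(FS.ReadFile('dh0:s/startup-sequence'));
end.

The point is only the shape of the abstraction: the desktop never has to care whether a path ends up in localStorage, in memory or on a real server disk.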

But when UAE.js kicks in you will be able to run your favorite Amiga demos, applications and games in your browser. I am actually very excited about seeing the performance. It runs most demos OK (using the AROS ROM files). I imagine running things like BlitzBasic, Amos Basic and SAS-C/C++ should work fine – or at least be within the “usable” range if you have a powerful PC to play with.

The V8 JavaScript engine in Chrome is due for an overhaul next year – and while I can only speculate, I'm guessing full-blown compilation will be the big addition. It already does some heavy JIT'ing, but once you throw LLVM-based, honest-to-goodness compilation into the picture, large JS applications are going to fly side by side with native stuff. And that's when cloud front-ends like ChromeOS and FriendOS are going to take off.

My little remake is not that ambitious, but I do intend to make it an absolute kick-ass system as far as the Amiga is concerned. And for Smart Pascal developers? Well, let's just say that this demo project has pushed the RTL for all it's worth, and helped fix bugs and expand the RTL in a way that makes it a real powerhouse!

Growing up

Do we ever really grow up? I'm not sure any more. I look at others and see some who have adopted this role, this image of what an adult should be like — but it's more often than not tied into the whole A4 family thing or some superficial work profile. And since most Amiga fanatics are in their 40s and 50s (the same age as Delphi hooligans – Turbo Pascal was released in 1983, two years before the Amiga came out), I guess this is when the kids have grown up enough for people to go “wait a minute, what… where is my Amiga!”.

But good things come to those who wait. If someone had told me that I would one day work side by side with giants like David John Pleasance, Francois Lionet and the crew at FriendUp Systems – I would never have believed them. A member of Quartex in meetings with the head of Commodore? My teenage self would never have believed it. Both of these men, along with all the tech guys at Commodore and Mark Sibly, the guy behind BlitzBasic — these were my teenage heroes. And now I get to work with two of them. That is priceless.

As for growing up – if that means losing that spark, that trigger which, when lost, would render us incapable of enjoying things like the Amiga, reduced to a suit in a grey world of PCs – well, then I'm happy to be exactly where I am. If you can go to work wearing an Amiga T-shirt, with tracker music on your iPod, a family you love at home and cool people to work with – I would call that a wrap.

And looking at the hundreds and thousands of people returning to the Amiga after 30 years in the desert – something tells me I won't be alone… 😉