Archive

Posts Tagged ‘freepascal’

Quartex Pascal, convergence is near

July 16, 2020 1 comment
67582488_10156396548830906_5204248427029856256_o

A Quartex Cluster of 5 x ODroid XU4. A $400 supercomputer running Quartex Media Desktop. Enough to power a school.

I only have the weekends to work on Quartex Pascal, but I have spent the past 18 months tinkering away, making up for wasted time. So I’m just going to leave some pictures here for you to enjoy.

Note: I was asked on LinkedIn if this has anything to do with Smart Mobile Studio, and the answer is a resounding no. I have nothing to do with Smart any more. QTX Pascal is a completely separate project that is written from scratch by yours truly.

The QTX Framework was initially a library I created back in 2014, but it has since been completely overhauled and turned into a full RTL. It is not compatible with Smart Pascal and has a completely different architecture.

QTX Pascal is indirectly funded by the Amiga Retro Community (which might sound strange, but the technical level of that community is beyond anything I have encountered elsewhere), since QTX is central to the creation of the Quartex Media Desktop. It is a shame that Embarcadero decided not to back the project. The compiler and toolchain would have been a part of Delphi by now, and I wouldn't have to write a separate IDE. But when they see what this system can deliver in terms of services, database work, mobile and embedded – they might regret it. The project only accepts donation funding; I am not interested in investors or partners. If you want a vision turned into reality, you gotta do it yourself. Everything else just gets in the way.

For developers by developers

Quartex Pascal is made for the community. It will be free for students and open-source projects, and a commercial license will never exceed $300. It is a shareware license, and the financial aspect is purely to help fund further research and development of the desktop cloud platform. The final goal (CloudForge) is to compile the IDE itself to JavaScript, so people only need a browser to write enterprise level applications via Quartex Media Desktop. When that is finished, my work is done – and people have a clear path to the future.

qtx_run_07

Unlike other systems, QTX started with the non-visual stuff, so the system has a well implemented infrastructure for writing universal services and servers, using node.js as a deployment host. Services are also Docker friendly, and run without change on Windows, Mac OS, Linux and a wealth of embedded systems and SBCs (single board computers).

qtx_run_08

A completely new RTL, written from scratch, generates close-to-native-speed JS that is highly compatible (even with legacy browsers) and rock solid.

qtx_run_09

There are several display modes for QTX forms, from dynamic to absolute positioning. You can mix and match between HTML and QTX code, including an HTML5-compliant WYSIWYG editor and style manager, which makes content handling a lot easier.

qtx_run_10

Write Object Pascal, JavaScript, HTML, LDEF (WebAssembly) and node.js services – or mix and match between them all for maximum potential. Writing mobile applications is now ridiculously easy compared to “other tools” out there.

Oh, and for the proverbial frosting: the full clustered Quartex Media Desktop and its services is a project type. That's right, a complete cloud infrastructure suitable for teams, kiosks, embedded, schools, intranets – and even a replacement for ChromeOS. You don't need to interface with Amazon; you get your own Amazon (optional, naturally).

desktop_02

Filesystem over websocket, IPC between hosted apps and the desktop, and full back-end services that are clustered and run on anything from a Raspberry Pi 4 to low-cost ARM SBCs.

49938355_1169526123220996_502291013608407040_o

WebAssembly made easy, both for Delphi and QTX.

smartdesk

Let there be rock

Oh, and documentation. Loads and loads of documentation.

qtx_run_11

Proper documentation, with both a class overview and explanations written by a human being, is paramount for learning and getting up to speed quickly.

I don't have a vacation this year, which means I only have weekends to tinker away. But I have spent the past 18-ish months preparing and slowly finishing the pieces I needed, from vector containers to form design controls to a completely rewritten RTL. So yeah, this time I'm doing it my way.

Vector Containers For Delphi and FPC

April 11, 2020 Leave a comment

Edit: Version 1.0.1 has been released, with a ton of powerful features. Read about it here and grab your fork: https://jonlennartaasenden.wordpress.com/2020/04/13/qtx-framework-for-delphi-and-fpc-is-available-on-bitbucket/


If you have been looking at C++ and envied it its std::vector class, wanting the same for Delphi, or wished you could access untyped memory using a typed view (basically turning a buffer into an array of <T>), then I have some good news for you!

Vector containers, unified storage model and typed views are just some of the highlights of my vector-library. I did an article on the subject at the Embarcadero community website, so head over and read up on how you can enjoy these features in your Delphi application!
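
To give a rough idea of what a typed view means in practice, here is a minimal sketch in plain Delphi. This is not the actual API of my vector-library, only an illustration of the concept: a generic record that exposes an untyped buffer as an array of <T>, intended for simple value types only.

unit TypedViewSketch;

// Minimal sketch of the "typed view" idea: exposing an untyped buffer as an
// array of <T>. NOT the vector-library's real API, just the concept.
// Only meaningful for simple value types (no strings, interfaces etc).

{$POINTERMATH ON}

interface

type
  TTypedView<T> = record
  private
    FBuffer: Pointer;
    FCount: integer;
    function GetItem(Index: integer): T;
    procedure SetItem(Index: integer; const Value: T);
  public
    constructor Create(ABuffer: Pointer; ASizeInBytes: integer);
    property Items[Index: integer]: T read GetItem write SetItem; default;
    property Count: integer read FCount;
  end;

implementation

constructor TTypedView<T>.Create(ABuffer: Pointer; ASizeInBytes: integer);
begin
  FBuffer := ABuffer;
  FCount := ASizeInBytes div SizeOf(T);
end;

function TTypedView<T>.GetItem(Index: integer): T;
begin
  // copy SizeOf(T) bytes out of the buffer into the result
  Move((PByte(FBuffer) + Index * SizeOf(T))^, Result, SizeOf(T));
end;

procedure TTypedView<T>.SetItem(Index: integer; const Value: T);
begin
  // copy SizeOf(T) bytes from the value into the buffer
  Move(Value, (PByte(FBuffer) + Index * SizeOf(T))^, SizeOf(T));
end;

end.

With something along these lines, a raw byte buffer from a stream or a memory block can be read and written as if it were an array of integers, singles or packed records.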

I also added FreePascal support, so that the library can be used with TMS Web Framework.

vectors

Head over to the Embarcadero Community website to read the full article

Delphi Dying? Think again, Tiobe

March 8, 2020 21 comments

At the beginning of last week, Tiobe once again threw a punch at Object Pascal, playing the whole “Delphi is dying” tune while focusing on outdated and quite frankly irrelevant episodes from the past – hoping, no doubt, to leave the reader with the impression that Delphi is stuck in the 90s.

This is the same pattern we often see whenever Delphi or Object Pascal in general experiences significant growth; or to be blunt, when the author cannot be bothered to think independently, but simply parrots hearsay and misinformation on autopilot.

It is lame, superficial and Tiobe’s biggest mistake to date.

Tiobe

Guess “alternative news” is no longer limited to individuals like Alex Jones

Just to underline the problem areas here: the ranking is based on their internal system (there is no standard for how to rank popularity), and while I have issues with how they build up their score, it's ultimately the March editorial text that has caused irritation and shock. You don't declare a language dead when there are over 10 million developers using it. This type of editorial could have very real consequences – which in turn brings us to their ranking system and how they arrived at their conclusions.

I would have understood their statement if it had been issued between 2007 and 2010, because Delphi was at that time transitioning between Borland and Embarcadero. But to issue something like this in 2020? After a decade's worth of restoration, optimization, modernization and above all – forging a thriving community that goes from victory to victory month after month, year after year? It makes absolutely no sense.

Significant growth

In 2018 there were roughly six million Delphi developers (I worked at Embarcadero at the time), with a total estimate of ten million Object Pascal developers worldwide when counting all alternative compilers, dialects and indeed – known piracy issues.

“Tiobe failed stupendously in their data mining operation, they seem to be oblivious regarding the demographic in which the language is used”

Since that time Delphi has made strides into the universities in Scandinavia, South America and the Middle East. Turkey recently announced their dedication to native and archetypal software development with Delphi (provided free for students), which adds a whopping one million students to the already large body of users.

Embarcadero has slowly but steadily rebuilt much of the infrastructure that existed under Borland, from professional training at Embarcadero Academy to entry level training at LearnDelphi.org. The Idera community pages likewise produce a large body of articles on a weekly basis. Comparing the Delphi and C++Builder ecosystem today with its tragic state back in 2010 is like night and day.

academy

Training is available for both Enterprise level developers and students alike

With so much positive happening in the world of Object Pascal, Tiobe's article comes across as a grave, intentional misrepresentation at worst, or an intellectual emergency at best. It is completely out of place and carries the tell-tale signs of an echo chamber.

Tiobe has lost all credibility

I have to be honest. I have never taken Tiobe that seriously, because they have made too many mistakes in the past to have any form of credibility when it comes to Delphi and Object Pascal as a language. And when I say mistakes, I mean monumental blunders that just annihilate all possibility that they treat languages on equal footing.

“not only have Tiobe failed in their indexing, they have completely and utterly misunderstood the demographic in which the language is used”

If we go back a decade, Tiobe actually based their numbers on the keyword “Pascal”. In other words, they excluded not just Delphi commits to GitHub, BitBucket and similar services – they also managed to exclude Freepascal and every subsequent dialect that signifies Object Pascal as a whole. So for quite some time their entire statistic was based on the off chance that people typed “Pascal” in their project or commit entries.

To make matters worse, their search tech was not smart enough to recognize “Pascal” in composite words. So if you wrote “ObjectPascal” as a single word, the commit was excluded; as was “Freepascal”, “Smartpascal”, “Oxygenepascal” and variations using a hyphen (and the same for abbreviations).

Developers also use the term Lazarus and FPC interchangeably since Lazarus typically means people use the LCL, the visual framework used to write desktop applications with Freepascal. So while Freepascal has nothing to do with Delphi in terms of intellectual property, the two compilers are used by the community as a whole.

But let’s look at why Tiobe’s indexing fails for Delphi. Just what are they doing wrong?

  • Delphi has been around for 25 years, and its roots stretch back to the birth of C. Using Stack Overflow as an indicator of popularity is ludicrous, since the majority of errors and problems have been largely ironed out in the past, leaving only extremely advanced and rare topics. If problems are the criterion, then I guess that explains why C# and Java soar in the ranking.
  • Nobody searches Google for “Delphi programming”. You search for explicit topics like composite polygon clipping with GDI+, and then add “delphi” to limit the search to said language. Just like C/C++, Object Pascal is an archetypal language. It stretches from kernel work with inline assembly, to cloud services and HTML5 rendering. So the topics people search for are usually straight out of the operating-system strata.
  • Delphi developers communicate in dedicated groups, such as Delphi Developer on Facebook. There is also a thriving community on the Delphi Praxis forums, not to mention the Freepascal forums. None of which seem to be included in Tiobe’s activity statistics.
  • Object Pascal has several frameworks and run-time libraries. Delphi ships with two: the VCL and FMX.
  • Freepascal operates with its own, open-source variation called the LCL.
  • Freepascal also targets WebAssembly and JavaScript, and has variations of the LCL adapted to those targets.
  • And then there are third-party, commercial alternatives that cover HTML5/JS, like TMS WebCore, Smart Pascal, Oxygene Pascal and the upcoming Quartex Pascal. Around these runtime libraries (VCL, FMX and LCL) there are thousands of libraries, components and frameworks, large and small, that don't necessarily put “Delphi” or “Object Pascal” in their metadata.
  • Tiobe also fails to include feeds like BeginEnd.net or DelphiFeeds, which syndicate on average 3000 unique blog posts a year, representing a consistent and very much alive stream of information and content.

Delphi and Freepascal, which represent the most widely used compilers, are predominantly used to write commercial, closed-source products, which by consequence means that the code and the activity involved are not public. For Tiobe to so utterly misunderstand the demographic for Object Pascal in general is quite frankly outrageous. If you are going to rank a language that involves millions of users, then at least have the decency to investigate the communities it involves.

Excluding the factors I have outlined above makes as much sense as excluding Mono from C#.

Incompetence or plain ignorance?

It was only after an avalanche of complaints in 2014, orchestrated by yours truly, in which members of the Delphi Developer group on Facebook sent complaints en masse to Tiobe, that they addressed the use of “Pascal” to represent Delphi and associated dialects. Yet for all the complaints, outlined in letters that no sentient human being could misunderstand – all Tiobe managed to do was to add “Object Pascal” to their list. Which, believe it or not, was unfamiliar to them.

It’s funny because it’s true

But do you think they bothered to do it right? Afraid not. Instead of aggregating all of the dialects, frameworks and variations of names under a single banner, they still to this day operate with two very specific search elements, namely “Delphi” or “Object Pascal”.

I sure hope the dairy industry doesn’t hire Tiobe to do statistics on milk, because if their coverage of Object Pascal is anything to go by, they will be ranking by yogurt.

No updates since 2018? Really Tiobe?

When a global index service like Tiobe manages to write, and I quote:

“However, the latest Delphi release is from 2018” – Source: Tiobe, March report

You really have to ponder if human beings are involved in their business at all. I’m not expecting much, honestly, but I do expect them to interact with the community they supposedly track and build a statistic on. Have they visited Delphi Developer and talked to the admins about growth numbers? Have they talked to Embarcadero to get some figures and coverage there? Did they contact the Freepascal community to get some download statistics from them?

Delphi 10.3.3 was released on November 21st, 2019. The version that Tiobe seems to think is the last update is in fact the last release with a city name (which was launched in 2018). Since then there have been three successive, regular updates; most developers are now using version 10.3.3, with 10.3.4 about to be released. This just underlines how oblivious Tiobe is to our part of the industry.

Delphi_IDE

Modern Delphi is used by millions of professional developers globally

Delphi and Freepascal are different in more ways than one, but beyond language compatibility there is one aspect that is quintessential for them both; namely their role in the commercial sector. Where other languages, like C/C++ or (for example) JavaScript, see a lot of open-source activity, especially with regard to Linux and Node.js – Delphi and Freepascal are predominantly used to write high-quality, commercial, closed-source business applications. In other words, the vast majority of code produced by the millions of Object Pascal developers around the world is never publicly committed to GitHub or BitBucket.

So not only have Tiobe failed stupendously in their data mining operation, they seem oblivious to the demographic in which the language is used.

academy2

The selection of books, video tutorials and coding material for Delphi is recovering at a rapid pace. And much like C/C++, there are classic books on Amazon that are just as relevant today as they were 10 years ago. Thankfully Delphi doesn't suffer from the “learn Delphi in 2 weeks” style of books, because any developer worth his salt knows that such books are for the gullible and naive.

Developers use Delphi and Freepascal to deliver rock solid, data driven services; services that are expected to run 24/7 with zero downtime, processing millions of transactions. Delphi is used to write medical software that manages networks of hospitals, with tens of thousands of patients. Delphi is used by banks to power their ATMs, and Delphi is used to do the heavy lifting in thousands of POS (point of sale) terminals across Europe. Terminals that don't have time to wait for a garbage collector to kick in, only to cause catastrophic CPU spikes (I won't mention names, but attempting to switch to C# was a disaster for one of the biggest POS terminal suppliers in Europe).

academy3

Delphi represents the back-bone of the medical software industry in Scandinavia and Europe at large. Many have tried to replace Delphi, but end up with expensive lessons in why archetypal languages are indeed called archetypal.

Object Pascal is used by governments, Fortune 500 companies and the guy with a million dollar idea working out of his parents' garage; it is used to write cloud accounting software, invoicing systems and medical journaling; it is used by the music industry and graphical design. There are large and extremely successful products out there that don't advertise that they are written in Delphi (just like you don't stamp “made with C++” on a piece of software). You would be surprised!

Object Pascal is used by developers who value speed, security, creative freedom and the benefit of a mature feature matrix that only C/C++ and Object Pascal bring. C is by definition three years older than Pascal, but these two archetypal languages have evolved side by side.

There is a reason these two languages represented the university curriculum for close to two decades; further still if we include Turbo Pascal. And Delphi is once again returning home to academia, to the applause of teachers who were forced to teach Java and hated every minute of it (I helped set up two universities with Delphi in Norway, so I have some first-hand accounts in the matter).

Reflections

Since Delphi is growing aggressively these days, Embarcadero is making waves. A few months back we saw how a well-known team of C# influencers took a stab at Delphi (and me in particular, no doubt because I have been so outspoken). And as Delphi now returns to academia, Tiobe is demonstrating a bias that leaves little to the imagination. Especially when you know their numbers account for nothing and border on fiction.

giphy

If I didn’t know better, I would say someone is worried. And it’s not the Delphi and Freepascal communities. Modern Delphi is a power-house for software development, and it has the potential to disrupt and restore the devtool market.

There is a lot of money involved, so I am not surprised we are seeing a string of attempts at undermining the importance of Object Pascal. I had hoped Tiobe would adopt a higher standard though.

Then again, the ship of credibility sailed when they couldn’t tell Turbo Pascal from Object Pascal.

Nodebuilder, QTX and the release of my brand new social platform

January 2, 2020 9 comments

First, let me wish everyone a wonderful new year! With xmas and the silly season firmly behind us, and my batteries recharged – I feel my coding fingers itch to get started again.

2019 was a very busy year for me. Exhausting even. I have juggled a full-time job, kids and family, as well as our community project, Quartex Media Desktop. And since that project has grown considerably, I had to stop and work on the tooling. Which is what NodeBuilder is all about.

I have also released my own social media platform (see further down). This was initially scheduled for Q4 2020, but Facebook pissed me off something insane, so I set it up in December instead.

NodeBuilder

node

Those of you who read my blog probably remember the message system I made for the Quartex Desktop, called Ragnarok. This is a system for dealing with message dispatching and RPC, except that the handling is decoupled from the transport medium. In other words, it doesn't care how you deliver a message (WebSocket, UDP, REST), but rather takes care of serialization, binary data and security.

All the back-end services that make up the desktop system are constructed around Ragnarok. So each service exposes a set of methods that the core can call, much like a normal RPC / SOAP service would. Each method is represented by a request and response object, which Ragnarok serializes into a JSON message envelope.

In our current model I use WebSocket, which is a full duplex, long-term connection (to avoid the overhead of having to connect and perform a handshake for each call). But there is nothing in the way of implementing a REST transport layer (UDP is already supported; it's used by the Zero-Config system, where the services automatically find each other and register as long as they are connected to the same router or switch). For the public service I think REST makes more sense, since it will better utilize the software clustering that node.js offers.
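
Purely to illustrate the shape of such a system, here is a sketch of what a request/response pair and its envelope can look like. The names and the JSON layout below are made up for the example; they are not the actual Ragnarok declarations.

unit RagnarokSketch;

// Hypothetical sketch only: class names, fields and the JSON layout in the
// comment below are illustrative, not the actual Ragnarok declarations.

interface

type
  TLoginRequest = class
  public
    Username: string;
    Password: string;
  end;

  TLoginResponse = class
  public
    Success: boolean;
    SessionId: string;
  end;

  // A call is serialized into a JSON envelope along these lines, so the
  // transport (WebSocket, REST or UDP) only ever sees an opaque message:
  //
  //   {
  //     "routing": { "service": "auth", "method": "Login" },
  //     "payload": { "Username": "jon", "Password": "..." }
  //   }

implementation

end.

With 20 or 30 methods per service, it is exactly this kind of boilerplate that NodeBuilder is meant to generate for you.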

nodebuilder

Node Builder is a relatively simple service designer, but highly effective for our needs


Now for small services that expose just a handful of methods (like our chat service), writing the message classes manually is not really a problem. But the moment you start having 20 or 30 methods – and need to implement up to 60 message classes manually – this approach quickly becomes unmanageable and prone to errors. So I simply had to stop before xmas and work on our service designer. That way we can generate the boilerplate code in seconds rather than days and weeks.

While I don't have time to evolve this software beyond that of a simple service designer (well, I kinda did already), I have no problem seeing this system as the beginning of a wonderful, universal service authoring system. One that includes coding, libraries and the full scope of the QTX runtime-library.

In fact, most of the needed parts are in the codebase already, but not everything has been activated. I don’t have time to build both a native development system AND the development system for the desktop.

nodebuilder4

NodeBuilder already has a fully functional form designer and code editor, but they are dormant for now due to time restrictions. Quartex Media Desktop comes first.

But right now, we have bigger fish to fry.

Quartex Media Desktop

We have made tremendous progress on our universal desktop environment, to the point where the baseline services are very close to completion. A month should be enough to finish this unless something unforeseen comes up.

desktop

Quartex Media Desktop provides an ecosystem for advanced web applications

You have to factor in that this project has only had weekends and the odd after-work hour allocated to it. So even though we have been developing this for 12 months, the actual amount of days is roughly half of that.

So all things considered, I think we have produced a massive amount of code in a short time. Even simple 2D games usually take two years of daily development, and that includes a team of at least five people! I'm a single developer working in my spare time.

So what exactly is left?

The last thing we did before Xmas was to throw out the last remnants of Smart Mobile Studio code. The back-end services are now completely implemented in our own QTX runtime-library, which has been written from scratch. There is not a line of code from Smart Mobile Studio in QTX, which means we no longer have to care what that system does or where it goes.

To sum up:

  • Push all file handling code out of the core
  • Implement file-handling as its own service

Those two steps might seem simple enough, but you have to remember that the older code was based on the Linux path system, and was read-only.

So when pushing that code out of the core, we also have to add all the functionality that was never implemented in our prototype.

nodebuilder2

Each class actually represents a separate “mini” program, and there are still many more methods to go before we can put this service into production.

Since JavaScript does not support threads, each method needs to be implemented as a separate program. So when a method is called, the file/task manager literally spawns a new process just for that task, and the result is swiftly returned to the caller in an async manner.

So what is ultimately simple, becomes more elaborate if you want to do it right. This is the price we pay for universality and a cluster enabled service-stack.

This is also why I have put the service development on pause until we have finished the NodeBuilder tooling. And I did this because I know from experience that the moment the baseline is ready, both myself and users of the system are going to go "oh we need this, and that and those". Being able to quickly design and auto-generate all the boilerplate code will save us months of work. So I would rather spend a couple of weeks on NodeBuilder than waste months having to manually write all that boilerplate code down the line.

What about the QTX runtime-library?

Writing an RTL from scratch was not something I could have anticipated before we started this project. But thankfully the worst part of this job is already finished.

The RTL is divided into two parts:

  • Non-visual code. Classes and methods that make QTX an RTL
  • Visual code. Custom controls + standard controls (buttons, lists etc.)
  • Visual designer

As you can see, the non-visual aspect of the system is finished and working beautifully. It's a lot faster than the code I wrote for Smart Mobile Studio (roughly twice as fast on average). I have also implemented a full visual designer, both as a Delphi visual component and as a QTX visual component.

nodebuilder3

Quartex Media Desktop makes running on several machines [cluster] easy and seamless

So fundamental visual classes like TCustomControl are already there. What I haven't had time to finish are the standard controls, like TButton, TListBox, TEdit and those types of visual components. They will be added after the release of QTX, at which point we throw out the absolute last remnants of Smart Mobile Studio from the client (HTML5 part) software too.

Why is the QTX Runtime-Library important again?

When the desktop is out the door, the true work begins! The desktop has several roles to play, but its most important aspect is to provide an ecosystem capable of hosting web based applications, offering features and methods traditionally only found in Windows, Linux or OS X. It truly is a complete cloud system that can scale from a single affordable SBC (single board computer) to a high-end cluster of powerful servers.

Clustering and writing distributed applications has always been difficult, but Quartex Media Desktop makes it simple. It is no more difficult for a user to work on a clustered system than it is to work on a normal, single OS. The difficult part has already been taken care of, and as long as people follow the rules, there will be no issues beyond ordinary maintenance.

And the first commercial application to come out of Quartex Components is CloudForge, which is the development system for the platform. It has the same role as Visual Studio for Windows, or Xcode for Apple OS X.

78498221_438784840394351_7041317054627971072_n

The Quartex Media Desktop Cluster cube. A $400 supercomputer.

I have prepared three compilers for the system already. First there is C/C++, courtesy of Clang, so C developers will be able to jump in and get productive immediately. The second compiler is Freepascal, or more precisely pas2js, which allows you to compile ordinary Freepascal code (which is highly Delphi compatible) to both JavaScript and WebAssembly.

And last but not least, there is my fork of DWScript, which is the same compiler that Smart Mobile Studio uses, except that my fork is based on the absolute latest version, and I have modified it heavily to better match special features in QTX. So right out of the door CloudForge will have C/C++, two Object Pascal compilers, and vanilla JavaScript and TypeScript. TypeScript also has its own WebAssembly compiler, so doing hard-core development directly in a browser or HTML5 viewport is where we are headed.

Once the IDE is finished I can finally, finally continue on the LDEF bytecode runtime, which will be used in my BlitzBasic port and ultimately replace Clang, Freepascal and DWScript. As a bonus it will emit native code for a variety of systems, including x86, ARM, 68k [including 68080] and PPC.

This might sound incredibly ambitious, if not impossible. But what I'm ultimately doing here is moving existing code that I already have into a new paradigm.

The beauty of Object Pascal is the sheer size and volume of available components and code. Some refactoring must be done due to the async nature of JS, but when needed we fall back on WebAssembly via Freepascal (WASM executes linearly, just like ordinary native code does).

A brand new social platform

During December, Facebook royally pissed me off. I cannot underline enough how much I loathe A.I. censorship, and the mistakes that A.I. makes – mistakes you are utterly powerless to complain about or have reviewed by a human being. In my case I posted a gif from their own mobile application, of a female body builder doing push-ups while in a hand-stand. In other words, a completely harmless gif with strength as the punchline. The A.I. was not able to distinguish between a leotard and bare skin, and just like that I was muted for over a week. No human being would make such a ruling. As an admin of a fairly large set of groups, there are many cases where bans are the result: disgruntled members act out of revenge and report technical posts about coding as porn or offensive. Again, you are helpless, because there is nobody you can talk to about resolving the issue. And this time I had enough.

It was always planned that we would launch our own social media platform, an alternative to Facebook aimed at adult geeks rather than kids (Facebook operates with an age limit of 12 years). So instead of waiting, I rushed out and set up a brand new social network; one where those banal restrictions Facebook has conditioned us with do not apply.

Just to underline, this is not some simple and small web forum. This is more or less a carbon copy of Facebook the way it used to be 8-9 years ago. So instead of having a single group on Facebook, we can now have as many groups as we like, on a platform that looks more or less identical to Facebook – but under our control and human rules.

AD1

Amigadisrupt.com is a brand new social media platform for geeks

You can visit the site right now at https://www.amigadisrupt.com. Obviously the major content on the platform right now is dominated by retro computing – but groups like Delphi Developer and FPC Developer have already been set up and are in use. If you are expecting thousands of active users, that will take time. We are now closing in on 250 active users, which is pretty good for such a short period of time. I don't want a platform anywhere near as big as FB. The goal is to get 10k users and have a stable community of coders, retro geeks, builders and creative individuals.

AD (Amiga Disrupt) will be a standard application that comes with Quartex Media Desktop. This is the beauty of web technology, in that it can unify different resources under one roof. And we will have our cake and eat it come hell or high water.

Disclaimer: Amiga Disrupt has a lower age limit of 18 years. This is a platform meant for adults. Which means there will be profanity, jokes that would get you banned on Facebook and content that is not meant for kids. This is hacker-land, and political correctness is considered toilet paper. So if you need social toffery like FB and Twitter deals with, you will be kicked by one of the admins.

After you sign up, your feed will be completely empty. Here is how to get it started. And feel free to add me to your friends-list!

Hydra, what’s the big deal anyway?

October 29, 2019 7 comments

RemObjects Hydra is a product I have used for years in concert with Delphi, and like most developers that come into contact with RemObjects products – once the full scope of the components hits you, you never want to go back to not using Hydra in your applications.

Note: It’s easy to dismiss Hydra as a “Delphi product”, but Hydra for .Net and Java does the exact same thing, namely let you mix and match modules from different languages in your programs. So if you are a C# developer looking for ways to incorporate Java, Delphi, Elements or Freepascal components in your application, then keep reading.

But let’s start with what Hydra can do for Delphi developers.

What is Hydra anyways?

Hydra is a component package for Delphi, Freepascal, .Net and Java that takes plugins to a whole new level. Now bear with me for a second, because these plugins are in a completely different league from anything you have used in the past.

In short, Hydra allows you to wrap code and components from other languages, and use them from Delphi or Lazarus. There are thousands of really amazing components for the .Net and Java platforms, and Hydra allows you to compile those into modules (or "plugins" if you prefer); modules that can then be used in your applications as if they were native components.

hydra-01-overview

Hydra, here using a C# component in a Delphi application

But it doesn't stop there; you can also mix VCL and FMX modules in the same application. This is extremely powerful, since it offers a clear path to modernizing your codebase gradually rather than doing a time-consuming and costly re-write.

So if you want to move your aging VCL codebase to Firemonkey, but the cost of having to re-write all your forms and business logic for FMX would break your budget – that's where Hydra gives you a second option: namely that you can continue to use your VCL code from FMX and refactor the application at your own tempo and with minimal financial impact.

The best of all worlds

Not long ago RemObjects added support for Lazarus (Freepascal) to the mix, which once again opens a whole new ecosystem that Delphi, C# and Java developers can benefit from. Delphi has a lot of really cool components, but Lazarus has components that are not always available for Delphi. There are some really good developers in the Freepascal community, and you will find hundreds of components and classes (if not thousands) that are open source. For example, Lazarus has a branch of SynEdit that is much more evolved and polished than the fork available for Delphi. And with Hydra you can compile that into a module / plugin and use it in your Delphi applications.

This is also true for Java and C# developers. Some of the components available for native languages might not have similar functionality in the .Net world, and by using Hydra you can tap into the wealth that native languages have to offer.

As a Delphi or Freepascal developer, perhaps you have seen some of the fancy grids C# and Java coders enjoy? Developer Express has some of the coolest components available for any platform, but their focus is more on .Net these days than Delphi. They do maintain the control packages they have, but compared to the amount of development done for C#, their Delphi offerings are abysmal. So with Hydra you can tap into the .Net side of things and use the latest components and libraries in your Delphi applications.

Financial savings

One of the coolest features of Hydra is that you can use it across Delphi versions. This has helped me manage the price-tag of updating to the latest Delphi.

It’s easy to forget that whenever you update Delphi, you also need to update the components you have bought. This was one of the reasons I was reluctant to upgrade my Delphi license until Embarcadero released Delphi 10.2. Because I had thousands of dollars invested in components – and updating all my licenses would cost a small fortune.

So to get around this, I put the components into a Hydra module and compiled that using my older Delphi. Then I simply used those modules from my new Delphi installation. This way I was able to cut costs by thousands of dollars and enjoy the latest Delphi.

hydramix

Using Firemonkey controls under VCL is easy with Hydra

A couple of years back I also took the time to wrap a ton of older components that work fine but are no longer maintained or sold. I used an older version of Delphi to get these components into a Hydra module – and I can now use those with Delphi 10.3 (!). In my case there was a component-set for working closely with Active Directory that I have used in a customer's project (much faster than having to go the route via SQL). The company that made these doesn't exist any more, and I have no source-code for the components.

The only way I could have used these without Hydra would be to compile them into a .dll file and painstakingly export every single method (or use COM+ to cross the 32-bit / 64-bit barrier), which would have taken me a week since we are talking about a large body of quality code. With Hydra I was able to wrap the whole thing in less than an hour.

I’m not advocating that people stop updating their components. But I am very thankful for the opportunity to delay having to update my entire component stack just to enjoy a modern version of Delphi.

Hydra gives me that opportunity, which means I can upgrade when my wallet allows it.

Building better applications

There is also another side to Hydra, namely that it allows you to design applications in a modular way. If you have the luxury of starting a brand new project and using Hydra from day one, you can isolate each part of your application as a module, avoiding the trap of monolithic applications.

img_517046

Hydra for .Net allows you to use Delphi, Java and FPC modules under C#

This way of working has great impact on how you maintain your software, and consequently how you issue hotfixes and updates. If you have isolated each key part of your application as separate modules, you don’t need to ship a full build every time.

This also safeguards you from having all your eggs in one basket. If you have isolated each form (for example) as a separate module, there is nothing stopping you from rewriting some of these forms in another language – or crossing the VCL and FMX barrier. You have to admit that being able to use the latest components from Developer Express is pretty cool. There is not a shadow of a doubt that Developer Express makes the best damn components around for any platform. There are many grids for Delphi, but they can't hold a candle to the latest and greatest from Developer Express.

Why can’t I just use packages?

If you are thinking “hey, this sounds exactly like packages, why should I buy Hydra when packages do the exact same thing?”, that's actually not how packages work in Delphi.

Delphi packages are cool, but they are also severely limited. One of the reasons you have to update your components whenever you buy a newer version of Delphi, is because packages are not backwards compatible.

delphi-500

Delphi packages are great, but severely limited

A Delphi package must be compiled with the same RTL as the host (your program), and version information and RTTI must match. This is because packages use the same RTL and more importantly, the same memory manager.

Hydra modules are not packages. They are clean and lean library files (*.dll files) that include whatever RTL you compiled them with. In other words, you can safely load a Hydra module compiled with Delphi 7 into a Delphi 10.3 application without having to re-compile.

Once you start to work with Hydra, you gradually build up modules of functionality that you can recycle in the future. In many ways Hydra is a whole new take on components and RAD. This is how Delphi packages and libraries should have been.

Without saying anything bad about Delphi, because Delphi is a system that I love very much; but having to update your entire component stack just to use the latest Delphi is sadly one of the factors that have led developers to abandon the platform. If you have USD 10,000 in dependencies, having to pay that as well as buying Delphi can be difficult to justify; especially when comparing with other languages and ecosystems.

For me, Hydra has been a tremendous boon for Delphi. It has allowed me to keep current with Delphi and all its many new features, without losing the money I have already invested in component packages.

If you are looking for something to bring your product to the next level, then I urge you to spend a few hours with Hydra. The documentation is exceptional, the features and benefits are outstanding — and you will wonder how you ever managed to work without them.

External resources

Disclaimer: I am not a salesman by any stretch of the imagination. I realize that promoting a product made by the company you work for might come across as a sales pitch; but that’s just it: I started to work for RemObjects for a reason. And that reason is that I have used their products since they came on the market. I have worked with these components long before I started working at RemObjects.

BTree for Delphi

July 7, 2019 5 comments
lookup

Click here to read

A few weeks back I posted an article on the RemObjects blog regarding universal code, and how, with a little bit of care, you can write code that compiles easily with Oxygene, Delphi and Freepascal – with emphasis on Oxygene.

The example I used was a BTree class that I originally ported from Delphi to Smart Pascal, and then finally to Oxygene to run under WebAssembly.

Long story short I was asked if I could port the code back to Delphi in its more or less universal form. Naturally there are small differences here and there, but nothing special that distinctly separates the Delphi version from Oxygene or Smart Pascal.

Why this version?

If you google BTree and Delphi you will find loads of implementations. They all operate more or less identically, using records and pointers for optimal speed. I decided to base my version on classes for convenience, but it shouldn't be difficult to revert that to use records if you absolutely need it.

What I like about this BTree implementation is that it's very functional. It's easy to traverse the nodes using the ForEach() method, and you can add items using a number as an identifier, but it also supports string identifiers.

I also changed the typical data reference. The data each node represents is usually a pointer; I changed this to variant to make it more functional.

Well, here is the Delphi version as promised. Happy to help.

unit btree;

interface

uses
  System.Generics.Collections,
  System.Sysutils,
  System.Classes;

type

  // BTree leaf object
  TQTXBTreeNode = class(TObject)
  public
    Identifier: integer;
    Data:       variant;
    Left:       TQTXBTreeNode;
    Right:      TQTXBTreeNode;
  end;

  [Weak]
  TQTXBTreeProcessCB = reference to procedure (const Node: TQTXBTreeNode; var Cancel: boolean);

  EBTreeError = class(Exception);

  TQTXBTree = class(TObject)
  private
    FRoot:    TQTXBTreeNode;
    FCurrent: TQTXBTreeNode;
  protected
    function  GetEmpty: boolean;  virtual;
    function  GetPackedNodes: TList<TQTXBTreeNode>;

  public
    property  Root: TQTXBTreeNode read FRoot;
    property  Empty: boolean read GetEmpty;

    function  Add(const Ident: integer; const Data: variant): TQTXBTreeNode; overload; virtual;
    function  Add(const Ident: string; const Data: variant): TQTXBTreeNode; overload; virtual;

    function  Contains(const Ident: integer): boolean; overload; virtual;
    function  Contains(const Ident: string): boolean; overload; virtual;

    function  Remove(const Ident: integer): boolean; overload; virtual;
    function  Remove(const Ident: string): boolean; overload; virtual;

    function  Read(const Ident: integer): variant; overload; virtual;
    function  Read(const Ident: string): variant; overload; virtual;

    procedure Write(const Ident: string; const NewData: variant); overload; virtual;
    procedure Write(const Ident: integer; const NewData: variant); overload; virtual;

    procedure Clear; overload; virtual;
    procedure Clear(const Process: TQTXBTreeProcessCB); overload; virtual;

    function  ToDataArray: TList<variant>;
    function  Count: integer;

    procedure ForEach(const Process: TQTXBTreeProcessCB);

    destructor Destroy; override;
  end;

implementation

//#############################################################################
// TQTXBTree
//#############################################################################

destructor TQTXBTree.Destroy;
begin
  if FRoot <> nil then
    Clear();
  inherited;
end;

procedure TQTXBTree.Clear;
var
  lTemp:  TList<TQTXBTreeNode>;
  x:  integer;
begin
  if FRoot <> nil then
  begin
    // pack all nodes to a linear list
    lTemp := GetPackedNodes();

    try
      // release each node
      for x := 0 to ltemp.Count-1 do
      begin
        lTemp[x].Free;
      end;
    finally
      // dispose of list
      lTemp.Free;

      // reset pointers
      FCurrent := nil;
      FRoot := nil;
    end;
  end;
end;

procedure TQTXBTree.Clear(const Process: TQTXBTreeProcessCB);
begin
  ForEach(Process);
  Clear();
end;

function TQTXBTree.GetPackedNodes: TList;
var
  LData:  TList<TQTXBTreeNode>;
begin
  LData := TList<TQTXBTreeNode>.Create();
  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
  begin
    LData.Add(Node);
    Cancel  := false;
  end);
  result := LData;
end;

function TQTXBTree.GetEmpty: boolean;
begin
  result := FRoot = nil;
end;

function TQTXBTree.Count: integer;
var
  LCount: integer;
begin
  LCount := 0;
  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
    begin
      inc(LCount);
      Cancel := false;
    end);
  result := LCount;
end;

function TQTXBTree.ToDataArray: TList;
var
  Data: TList<variant>;
begin
  Data := TList<variant>.Create();

  ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
    begin
      Data.add(Node.data);
      Cancel := false;
    end);
  result := data;
end;

function TQTXBTree.Add(const Ident: string; const Data: variant): TQTXBTreeNode;
begin
  result := Add( Ident.GetHashCode(), Data);
end;

function TQTXBTree.Add(const Ident: integer; const Data: variant): TQTXBTreeNode;
var
  LNode:  TQTXBtreeNode;
begin
  LNode := TQTXBTreeNode.Create();
  LNode.Identifier := Ident;
  LNode.Data := data;

  if FRoot = nil then
    FRoot := LNode;

  FCurrent := FRoot;

  while true do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      if (FCurrent.left = nil) then
      begin
        FCurrent.left := LNode;
        break;
      end else
      FCurrent := FCurrent.left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      if (FCurrent.right = nil) then
      begin
        FCurrent.right := LNode;
        break;
      end else
      FCurrent := FCurrent.right;
    end else
    break;
  end;
  result := LNode;
end;

function TQTXBTree.Read(const Ident: string): variant;
begin
  result := Read( Ident.GetHashCode() );
end;

function TQTXBTree.Read(const Ident: integer): variant;
begin
  FCurrent := FRoot;
  while FCurrent <> nil do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else
    if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      result := FCurrent.Data;
      break;
    end
  end;
end;

procedure TQTXBTree.Write(const Ident: string; const NewData: variant);
begin
  Write( Ident.GetHashCode(), NewData);
end;

procedure TQTXBTree.Write(const Ident: integer; const NewData: variant);
begin
  FCurrent := FRoot;
  while (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
      FCurrent := FCurrent.Left
    else
    if (Ident > FCurrent.Identifier) then
      FCurrent := FCurrent.Right
    else
    begin
      FCurrent.Data := NewData;
      break;
    end
  end;
end;

function  TQTXBTree.Contains(const Ident: string): boolean;
begin
  result := Contains( Ident.GetHashCode() );
end;

function TQTXBTree.Contains(const Ident: integer): boolean;
begin
  result := false;
  if FRoot <> nil then
  begin
    FCurrent := FRoot;

    while ( (not Result) and (FCurrent <> nil) ) do
    begin
      if (Ident < FCurrent.Identifier) then
        FCurrent := FCurrent.Left
      else
      if (Ident > FCurrent.Identifier) then
        FCurrent := FCurrent.Right
      else
      begin
        Result := true;
        break;
      end
    end;
  end;
end;

function TQTXBTree.Remove(const Ident: string): boolean;
begin
  result := Remove( Ident.GetHashCode() );
end;

function TQTXBTree.Remove(const Ident: integer): boolean;
var
  LFound: boolean;
  LParent: TQTXBTreeNode;
  LReplacement,
  LReplacementParent: TQTXBTreeNode;
  LChildCount: integer;
begin
  FCurrent := FRoot;
  LFound := false;
  LParent := nil;
  LReplacement := nil;
  LReplacementParent := nil;

  while (not LFound) and (FCurrent <> nil) do
  begin
    if (Ident < FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.left;
    end else
    if (Ident > FCurrent.Identifier) then
    begin
      LParent := FCurrent;
      FCurrent := FCurrent.right;
    end else
    LFound := true;

    if LFound then
    begin
      LChildCount := 0;

      if (FCurrent.left <> nil) then
        inc(LChildCount);

      if (FCurrent.right <> nil) then
        inc(LChildCount);

      if FCurrent = FRoot then
      begin
        case LChildCount of
        0:  begin
              FRoot := nil;
            end;
        1:  begin
              if FCurrent.right = nil then
                FRoot := FCurrent.left
              else
                FRoot := FCurrent.Right;
            end;
        2:  begin
              LReplacement := FRoot.left;
              while (LReplacement.right <> nil) do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.right;
              end;

              if (LReplacementParent <> nil) then
              begin
                LReplacementParent.right := LReplacement.Left;
                LReplacement.right := FRoot.Right;
                LReplacement.left := FRoot.left;
              end else
                LReplacement.right := FRoot.right;

              FRoot := LReplacement;
            end;
        end;
      end else
      begin
        case LChildCount of
        0:  if (FCurrent.Identifier < LParent.Identifier) then
              LParent.left  := nil
            else
              LParent.right := nil;
        1:  if (FCurrent.Identifier < LParent.Identifier) then
            begin
              if (FCurrent.Left = nil) then
                LParent.left := FCurrent.Right
              else
                LParent.Left := FCurrent.Left;
            end else
            begin
              if (FCurrent.Left = nil) then
                LParent.right := FCurrent.Right
              else
                LParent.right := FCurrent.Left;
            end;
        2:  begin
              LReplacement := FCurrent.left;
              LReplacementParent := FCurrent;

              while LReplacement.right <> nil do
              begin
                LReplacementParent := LReplacement;
                LReplacement := LReplacement.right;
              end;
              LReplacementParent.right := LReplacement.left;

              LReplacement.right := FCurrent.right;
              LReplacement.left := FCurrent.left;

              if (FCurrent.Identifier < LParent.Identifier) then
                LParent.left := LReplacement
              else
                LParent.right := LReplacement;
            end;
        end;
      end;
    end;
  end;

  result := LFound;
end;

procedure TQTXBTree.ForEach(const Process: TQTXBTreeProcessCB);

  function ProcessNode(const Node: TQTXBTreeNode): boolean;
  begin
    result := false;
    if Node <> nil then
    begin
      if Node.left <> nil then
      begin
        result := ProcessNode(Node.left);
        if result then
          exit;
      end;

      Process(Node, result);
      if result then
        exit;

      if (Node.right <> nil) then
      begin
        result := ProcessNode(Node.right);
        if result then
          exit;
      end;
    end;
  end;

begin
  ProcessNode(FRoot);
end;

end.
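
For completeness, here is a quick usage sketch of the unit above as a small console program, showing string identifiers, variant data and ForEach() traversal. The program and variable names are of course just for the example.

program BTreeDemo;

{$APPTYPE CONSOLE}

uses
  System.Variants,
  btree;

var
  lTree: TQTXBTree;

begin
  lTree := TQTXBTree.Create();
  try
    // identifiers can be integers or strings (strings are hashed internally)
    lTree.Add('alpha', 100);
    lTree.Add('beta', 'hello world');

    if lTree.Contains('alpha') then
      writeln('alpha = ', VarToStr(lTree.Read('alpha')));

    // visit every node; set Cancel to true to stop the traversal early
    lTree.ForEach( procedure (const Node: TQTXBTreeNode; var Cancel: boolean)
      begin
        writeln(Node.Identifier, ' -> ', VarToStr(Node.Data));
        Cancel := false;
      end);
  finally
    lTree.Free;
  end;
end.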

Two new groups in the Developer family

July 1, 2019 2 comments

Delphi Developer is a group on Facebook that has been going strong for 12+ years. It was one of the first groups on Facebook, created the same week that Facebook allowed groups. With that group well established, it's time to expand and clean up the feed.

Last month I introduced a new group, RemObjects Developer, which is a group for developers that use RemObjects components, like the Remoting SDK, Data Abstract and/or Hydra – but in particular, developers using Oxygene, C#, Swift, Java or Go via Elements (the RemObjects compiler toolchain).

Two new groups

To further simplify syndication, and to clean up the feeds (which so far have been a pot-pourri of many topics, dialects and products), an additional two groups are now in place: FPC Developer and Node.js Developer.

Obviously there will be some overlap. Since FPC and Delphi have much in common and are for the most part compatible, some news will be shared between those groups. But all in all this is to clean up the newsfeed, which has so far been a mix and match of everything.

org

Simple overview of the groups

Node.js Developer is not meant to be purely about vanilla JavaScript. Node.js is ultimately a JavaScript runtime engine, which means you can use it to run or host WebAssembly libraries (as produced by Oxygene), or code generated via DWScript or Freepascal. You can think of it as a service host if you like.

So if you are writing WebAssembly applications using Elements, then the node.js group will no doubt be interesting too. The same goes for DWScript users, Smart Pascal users and Freepascal users – provided web tech is what they like.

What is this Quartex Components?

It's easier to manage multiple groups if you attach them to a parent page. So if you wonder why all the groups say "by Quartex Components", that is just a top-level page that helps me deal with syndication. For some reason Facebook's API only works for pages, not groups, so it's impossible to auto-import news (for example) without a page.

The name, “Quartex Components” is ultimately the name of my personal company. I used to produce security components for Delphi, but decided to open-source those for the community.

So Quartex Components is just an organizational element.

Generic protect for FPC/Lazarus

June 30, 2019 Leave a comment

Freepascal is not frequently mentioned on my blog. I have written about it from time to time, not always in a positive light though. Just to be clear, FPC (the compiler) is fantastic; it was one particular fork of Lazarus I had issues with, involving a license violation.

On the whole, Freepascal and Lazarus are capable of great things. There are a few quirks here and there (if not oddities) that prevent mass adoption (the excessive use of include-files to "fake" partial classes being one), but as Object Pascal compilers go, Freepascal is a battle-hardened, production ready system.

It's been Linux in particular that I have used Freepascal on. In 2015 Hydro Oil wanted to move their back-end from Windows to Linux, and I spent a few months converting Windows-only services into Linux daemons.

Today I find myself converting parts of the toolkit I came up with to Oxygene, but that’s a post for another day.

Generic protect

If you work a lot with multithreaded code, the unit I'm posting here might come in handy. Long story short: sharing composite objects between threads and the main process always means extra scaffolding. You have to make sure you don't access the list (or its elements) at the same time as another thread, for example. To ensure this you can either use a critical-section, or you can deliver the data with a synchronized call. This is more or less universal for all languages, no matter if you are using Oxygene, C/C++, C# or Delphi.

When this unit came into being, I was writing quite elaborate classes with a lot of lists. These classes could not share an ancestor, or I could have gotten away with just one locking mechanism. Instead I had to implement the same boilerplate code over and over again.

The unit below makes insulating (or protecting) classes easier. It essentially envelops whatever class instance you feed it, and returns a proxy object. Whenever you want to access your instance, you have to unlock it first or use a synchronizer (see below).

Works in both Freepascal and Delphi

The unit works for both Delphi and Freepascal, but there is one little difference. For some reason Freepascal does not support anonymous procedures, so we compensate and use inline procedures instead. While not a huge deal, I really hope the FPC team adds anonymous procedures; it makes life a lot easier for generics-based code. Async programming without anonymous procedures is highly impractical too.

So if you are in Delphi you can write:

var
 lValue: TProtectedValue<integer>;
begin
 lValue.Synchronize( procedure (var Value: integer)
 begin
   Value := Value * 12;
 end);
end;

But under Freepascal you must resort to:

var
 lValue: TProtectedValue<integer>;

procedure _UpdateValue(var Data: integer);
begin
 Data := Data * 12;
end;

begin
  lValue.Synchronize(@_UpdateValue);
end;

On small examples like these, the benefit of this style of coding might be lost; but if you suddenly have 40-50 lists that need to be shared between 100-200 active threads, it will be a time saver!

You can also use it on intrinsic datatypes:

lazarus
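To illustrate, here is a small sketch of what that looks like in Delphi syntax (under Freepascal you would pass a named procedure with @ instead). The procedure name is just for the example:

procedure CountSomething;
var
  lCounter: TProtectedValue<integer>;
begin
  lCounter := TProtectedValue<integer>.Create(0);
  try
    // Each read and write below is individually protected by the lock
    lCounter.Value := lCounter.Value + 1;

    // Synchronize() makes the whole read-modify-write atomic
    lCounter.Synchronize( procedure (var Data: integer)
    begin
      inc(Data);
    end);

    writeln('Counter is now: ', lCounter.Value);
  finally
    lCounter.Free;
  end;
end;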

OK, here we go:

unit safeobjects;

// 	SafeObjects
//	==========================================================================
//	Written by Jon-Lennart Aasenden
//	Copyright Quartex Components LTD, all rights reserved
//
//	This unit is a part of the QTX Patreon Library
//
//	NOTES ABOUT FREEPASCAL:
//	=======================
//	Freepascal does not allow anonymous procedures, which means we must
//	resort to inline procedures instead:
//
// 	Where we in Delphi could write the following for an atomic,
//	thread safe alteration:
//
// var
// 	LValue: TProtectedValue<integer>;
//
//	LValue.Synchronize( procedure (var Value: integer)
//	begin
//		Value := Value * 12;
//	end);
//
//	Freepascal demands that we use an inline procedure instead, which
//  is more or less the same code, just organized slightly differently.
//
// var
// 	LValue: TProtectedValue<integer>;
//
//  procedure _UpdateValue(var Data: integer);
//  begin
//  	Data := Data * 12;
//  end;
//
// begin
//	LValue.Synchronize(@_UpdateValue);
// end;
//
//
//
//

{$mode DELPHI}
{$H+}

interface

uses
  {$IFDEF FPC}
  SysUtils,
  Classes,
  SyncObjs,
  Generics.Collections;
	{$ELSE}
  System.SysUtils,
  System.Classes,
  System.SyncObjs,
  System.Generics.Collections;
  {$ENDIF}

type

  {$DEFINE INHERIT_FROM_CRITICALSECTION}

  TProtectedValueAccessRights = set of (lvRead, lvWrite);

  EProtectedValue = class(exception);
  EProtectedObject = class(exception);

  (* Thread safe intrinsic datatype container.
     When sharing values between processes, use this class
     to make read/write access safe and protected. *)

  {$IFDEF INHERIT_FROM_CRITICALSECTION}
  TProtectedValue<T> = class(TCriticalSection)
  {$ELSE}
  TProtectedValue<T> = class(TObject)
  {$ENDIF}
  strict private
    {$IFNDEF INHERIT_FROM_CRITICALSECTION}
    FLock: TCriticalSection;
    {$ENDIF}
    FData: T;
    FOptions: TProtectedValueAccessRights;
  strict protected
    function GetValue: T;virtual;
    procedure SetValue(Value: T);virtual;
    function GetAccessRights: TProtectedValueAccessRights;
    procedure SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
  		{$IFDEF FPC}
      TProtectedValueEntry = procedure (var Data: T);
  		{$ELSE}
      TProtectedValueEntry = reference to procedure (var Data: T);
      {$ENDIF}
  public
    constructor Create(Value: T); overload; virtual;
    constructor Create(Value: T; const Access: TProtectedValueAccessRights); overload; virtual;
    constructor Create(const Access: TProtectedValueAccessRights); overload; virtual;
    destructor Destroy;override;

    {$IFNDEF INHERIT_FROM_CRITICALSECTION}
    procedure Enter;
    procedure Leave;
    {$ENDIF}
    procedure Synchronize(const Entry: TProtectedValueEntry);

    property AccessRights: TProtectedValueAccessRights read GetAccessRights;
    property Value: T read GetValue write SetValue;
  end;

  (* Thread safe object container.
     NOTE #1: This object container **CREATES** the instance and maintains it!
              Use Edit() to execute a protected block of code with access
              to the object.

     Note #2: SetValue() does not overwrite the object reference, but
              attempts to perform TPersistent.Assign(). If the instance
              does not inherit from TPersistent an exception is thrown. *)
  TProtectedObject<T: class, constructor> = class(TObject)
  strict private
    FData:      T;
    FLock:      TCriticalSection;
    FOptions:   TProtectedValueAccessRights;
  strict protected
    function    GetValue: T;virtual;
    procedure   SetValue(Value: T);virtual;
    function    GetAccessRights: TProtectedValueAccessRights;
    procedure   SetAccessRights(Rights: TProtectedValueAccessRights);
  public
    type
			{$IFDEF FPC}
      TProtectedObjectEntry = procedure (const Data: T);
	    {$ELSE}
      TProtectedObjectEntry = reference to procedure (const Data: T);
      {$ENDIF}
  public
    property    Value: T read GetValue write SetValue;
    property    AccessRights: TProtectedValueAccessRights read GetAccessRights;

    function    Lock: T;
    procedure   Unlock;
    procedure   Synchronize(const Entry: TProtectedObjectEntry);

    Constructor Create(const AOptions: TProtectedValueAccessRights = [lvRead,lvWrite]); virtual;
    Destructor  Destroy; override;
  end;

  (* TProtectedObjectList:
     This is a thread-safe object list implementation.
     It works more or less like TThreadList, except it deals with objects *)
  TProtectedObjectList = class(TInterfacedPersistent)
  strict private
    FObjects: TObjectList<TObject>;
    FLock: TCriticalSection;
  strict protected
    function GetEmpty: boolean;virtual;
    function GetCount: integer;virtual;

    (* QueryObject Proxy: TInterfacedPersistent allows us to
       act as a proxy for QueryInterface/GetInterface. Override
       and provide another child instance here to expose
       interfaces from that instead *)
  protected
    function GetOwner: TPersistent;override;

  public
    type
      {$IFDEF FPC}
      TProtectedObjectListProc = procedure (Item: TObject; var Cancel: boolean);
      {$ELSE}
      TProtectedObjectListProc = reference to procedure (Item: TObject; var Cancel: boolean);
      {$ENDIF}
  public
    constructor Create(OwnsObjects: Boolean = true); virtual;
    destructor  Destroy; override;

    function    Contains(Instance: TObject): boolean; virtual;
    function    Enter: TObjectList<TObject>; virtual;
    Procedure   Leave; virtual;
    Procedure   Clear; virtual;

    procedure   ForEach(const CB: TProtectedObjectListProc); virtual;

    Property    Count: integer read GetCount;
    Property    Empty: boolean read GetEmpty;
  end;

implementation

//############################################################################
//  TProtectedObjectList
//############################################################################

constructor TProtectedObjectList.Create(OwnsObjects: Boolean = True);
begin
  inherited Create;
  FObjects := TObjectList<TObject>.Create(OwnsObjects);
  FLock := TCriticalSection.Create;
end;

destructor TProtectedObjectList.Destroy;
begin
  FLock.Enter;
  FObjects.Free;
  FLock.Free;
  inherited;
end;

procedure TProtectedObjectList.Clear;
begin
  FLock.Enter;
  try
    FObjects.Clear;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetOwner: TPersistent;
begin
  result := NIL;
end;

procedure TProtectedObjectList.ForEach(const CB: TProtectedObjectListProc);
var
  LItem:  TObject;
  LCancel:  Boolean;
begin
	LCancel := false;
  if assigned(CB) then
  begin
    FLock.Enter;
    try
    	{$HINTS OFF}
      for LItem in FObjects do
      begin
        LCancel := false;
        CB(LItem, LCancel);
        if LCancel then
        	break;
      end;
      {$HINTS ON}
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.Contains(Instance: TObject): boolean;
begin
  result := false;
  if assigned(Instance) then
  begin
    FLock.Enter;
    try
      result := FObjects.Contains(Instance);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObjectList.GetCount: integer;
begin
  FLock.Enter;
  try
    result :=FObjects.Count;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.GetEmpty: Boolean;
begin
  FLock.Enter;
  try
    result := FObjects.Count<1;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObjectList.Enter: TObjectList<TObject>;
begin
  FLock.Enter;
  result := FObjects;
end;

procedure TProtectedObjectList.Leave;
begin
  FLock.Leave;
end;

//############################################################################
//  TProtectedObject
//############################################################################

constructor TProtectedObject<T>.Create(const AOptions: TProtectedValueAccessRights = [lvRead, lvWrite]);
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FLock.Enter();
  try
  	FOptions := AOptions;
  	FData := T.Create;
  finally
    FLock.Leave();
  end;
end;

destructor TProtectedObject<T>.Destroy;
begin
	FData.free;
  FLock.Free;
  inherited;
end;

function TProtectedObject<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  FLock.Enter;
  try
    result := FOptions;
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  FLock.Enter;
  try
    FOptions := Rights;
  finally
    FLock.Leave;
  end;
end;

function TProtectedObject<T>.Lock: T;
begin
  FLock.Enter;
  result := FData;
end;

procedure TProtectedObject<T>.Unlock;
begin
  FLock.Leave;
end;

procedure TProtectedObject<T>.Synchronize(const Entry: TProtectedObjectEntry);
begin
  if assigned(Entry) then
  begin
    FLock.Enter;
    try
      Entry(FData);
    finally
      FLock.Leave;
    end;
  end;
end;

function TProtectedObject<T>.GetValue: T;
begin
  FLock.Enter;
  try
    if (lvRead in FOptions) then
    	result := FData
  	else
    	raise EProtectedObject.CreateFmt('%s:Read not allowed error',[classname]);
  finally
    FLock.Leave;
  end;
end;

procedure TProtectedObject<T>.SetValue(Value: T);
begin
  FLock.Enter;
  try
    if (lvWrite in FOptions) then
    begin
      if (TObject(FData) is TPersistent)
      or (TObject(FData).InheritsFrom(TPersistent)) then
      	TPersistent(FData).Assign(TPersistent(Value))
    	else
      	raise EProtectedObject.CreateFmt
        	('Locked object assign failed, %s does not inherit from %s',
        	[TObject(FData).ClassName,'TPersistent']);

    end else
    raise EProtectedObject.CreateFmt('%s:Write not allowed error',[classname]);
  finally
    FLock.Leave;
  end;
end;

//############################################################################
//  TProtectedValue
//############################################################################

constructor TProtectedValue<T>.Create(const Access: TProtectedValueAccessRights);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
end;

constructor TProtectedValue<T>.Create(Value: T);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := [lvRead, lvWrite];
  FData := Value;
end;

constructor TProtectedValue<T>.Create(Value: T; const Access: TProtectedValueAccessRights);
begin
  inherited Create;
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock := TCriticalSection.Create;
  {$ENDIF}
  FOptions := Access;
  FData := Value;
end;

destructor TProtectedValue<T>.Destroy;
begin
  {$IFNDEF INHERIT_FROM_CRITICALSECTION}
  FLock.Free;
  {$ENDIF}
  inherited;
end;

function TProtectedValue<T>.GetAccessRights: TProtectedValueAccessRights;
begin
  Enter();
  try
    result := FOptions;
  finally
    Leave();
  end;
end;

procedure TProtectedValue<T>.SetAccessRights(Rights: TProtectedValueAccessRights);
begin
  Enter();
  try
    FOptions := Rights;
  finally
    Leave();
  end;
end;

{$IFNDEF INHERIT_FROM_CRITICALSECTION}
procedure TProtectedValue<T>.Enter;
begin
  FLock.Enter;
end;

procedure TProtectedValue<T>.Leave;
begin
  FLock.Leave;
end;
{$ENDIF}

procedure TProtectedValue<T>.Synchronize(const Entry: TProtectedValueEntry);
begin
  if assigned(Entry) then
  Begin
    Enter();
    try
      Entry(FData);
    finally
      Leave();
    end;
  end;
end;

function TProtectedValue<T>.GetValue: T;
begin
  Enter();
  try
    if (lvRead in FOptions) then
    	result := FData
    else
    	raise EProtectedValue.CreateFmt('%s: Read not allowed error', [Classname]);
  finally
    Leave();
  end;
end;

procedure TProtectedValue<T>.SetValue(Value: T);
begin
  Enter();
  try
    if (lvWrite in FOptions) then
    	FData:=Value
    else
    	raise EProtectedValue.CreateFmt('%s: Write not allowed error', [Classname]);
  finally
    Leave();
  end;
end;

end.
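
And finally, a quick usage sketch for the list class (again Delphi syntax; under Freepascal you would pass an ordinary procedure with @). The callback passed to ForEach runs while the lock is held, and Enter/Leave gives you direct, locked access to the inner list. The procedure name below is just for illustration:

procedure DumpNames(List: TProtectedObjectList);
var
  lRaw: TObjectList<TObject>;
begin
  // Visit every object in the list; set Cancel to true to stop early
  List.ForEach( procedure (Item: TObject; var Cancel: boolean)
  begin
    writeln(Item.ClassName);
  end);

  // Manual locking when you need the underlying list itself
  lRaw := List.Enter();
  try
    writeln('Objects in list: ', lRaw.Count);
  finally
    List.Leave();
  end;
end;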

Hydra now supports Freepascal and Java

June 15, 2019 2 comments

In case you guys missed it, RemObjects Hydra 6.2 now supports FreePascal!

This means that you can now use forms and units from .net and Java in your Freepascal applications – and (drumroll) also mix and match between Delphi, .net, Java and FPC modules! So if you see something cool that Freepascal lacks, just slap it in a Hydra module and you can use it across language barriers.

I have used Hydra for years with Delphi, and being able to use .net forms and components in Delphi is pretty awesome. It’s also a great framework for building modular applications that are easier to manage.

64296152_10156282425590906_705072396930908160_n

Being able to tap into Freepascal is a great feature. Or the other way around, with Freepascal showing forms from Delphi, .net or Java.

For example, if you are moving to Freepascal, you can isolate the forms or controls that are not available under Freepascal in a Hydra module, and voila – you can gradually migrate.

If you are moving to Oxygene Pascal the same applies: you can implement the immediate logic under .net, and then import and use the parts that can't easily be ported (or that you want to wait with).

The best of four worlds — You gotta love that!

Check out Hydra here:

https://hydra.remobjects.com/hydra/whatsnew/default.aspx

 

Amibian.js under the hood

December 5, 2018 2 comments

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!

intro

In a life-preserver no less 😀

But, with any new technology or invention there are two common traps that people can fall into: The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google's V8 run JavaScript almost as fast as compiled binary code ("native" means machine code, like that produced by a C/C++ compiler, Pascal compiler or anything else that produces programs that run under Linux or Windows).

It takes some adjustments, especially for traditional programmers that haven't paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled ("just in time", which means the compilation takes place inside the browser) to real machine code using state-of-the-art techniques (LLVM). So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap you can fall into is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don't fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won't deliver. Rome was not built in a day, and it's wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it's easy to mix these things up, I'm underlining this here – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined creates a portable operating-system that renders to HTML5. A system that was written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It's much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.

smart_ass

Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely in node.js (JavaScript) which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to the other without any changes. So if you have a running Amibian.js system on your x86 PC, and copy all the files to an ARM computer – you don't even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup” please remember what I said about JavaScript: the JavaScript in 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” – into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It delivers 120 fps flawlessly (browser is limited to 60 fps) and makes full use of standard browser technologies (WebGL).

utube

Click on picture above to watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “Asm.js” which means that the V8 runtime (the code that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine-code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • A HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between the traditional, machine-code based OS and our web variation is that our version doesn't have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).

smartdesk

Enjoying other cloud applications is easy with Amibian.js, here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I'm trying to avoid a bare-metal OS, otherwise I would have written the system using a native programming language like C or Object-Pascal. So I am not using JavaScript because I lack skill in native languages; I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end for Amibian.js consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs that are written for Amibian.js can call on these to perform a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it's designed to run clustered, using as many CPU cores as possible. It's also designed to scale, meaning that it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn't need the latest and coolest hardware to deliver good performance. You can build a cluster of old PC's in your office, or a handful of embedded boards (ODroid XU4, Raspberry PI's and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above come in.

As a software developer many of my customers work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, ticket systems for trains and busses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, unified API's that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock solid application model suitable for distributed computing. You might have 1 ticket booth, or 10.000 nationwide. There are no systems available that are designed to deal with web-technology on that scale. Yet 😉

Let's look at a couple of real-life scenarios that I have encountered; I'm confident you will recognize a common need. So here are some roles that Amibian.js can assume and help deliver a solution rapidly. It also gives you some idea of the economic possibilities.

Updated: Please note that we are talking JavaScript here, not native code. There are a lot of native solutions out there, but the whole point here is to forget about CPU, chipset and target and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router – you login to your router. It contains a small apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with and a pain for developers to update since it's connected to a native apache module which is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to display multimedia, preferably on touch capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel with varying degrees of success.
  • Hotels have more or less the exact same need but on a smaller scale, especially the larger hotels where the lobby has information booths, and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on the size they can need anything from a single to a hundred nodes.
  • Schools and education spend millions on training software and programming languages every year. Amibian.js can deliver both and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages and enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5 based media applications and give them the same depth as a native operating-system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past with the machine that defined multimedia, namely the Commodore Amiga. So the relation is more than just visual: Amibian.js uses the same system architecture – because we believe it's one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all I'm not selling anything. It's not like this project is something that is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done Amibian.js will be free for everyone (LGPL). And I'm also writing it because it's something that I need and that I haven't seen anywhere else. I think you have to write software for yourself, otherwise the quality won't be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly, is because I use a compiler system called Smart Mobile Studio. This saves months and years of development time, and I can use all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).

amibian_shell

Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++Builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the object-pascal language, a dialect which is roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data driven applications. It also thrives in embedded systems and low-level system services. In short: it's a lot easier to maintain 50.000 lines of object pascal code than 500.000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely using Smart Mobile Studio. This gives me as the core developer of both systems a huge advantage (who knows it better than the designer right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn't have OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the object-pascal community, components and libraries written in Delphi or Freepascal – which range in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it's been a popular toolkit for ages (Pascal is actually a couple of years older than C).

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be setup and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is yes, you install it. But it's the same as installing Chrome OS. It's not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform independent, chipset agnostic system. Something that doesn't care if you are using ARM, x86, PPC or Mips as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely setup a dedicated machine (or several) or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it's designed to host more elaborate applications. Where you would normally just embed an external website in an IFrame, Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window's message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But like I talked about above they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where half is installed as a node.js service, and the other half is served as a normal HTML5 app. This is the coolest program model, and developers essentially write both a server and a client – and then deploy it as a single package.
  • LDEF compiled bytecode applications, a 68k inspired assembly language that is JIT compiled by the browser (commonly called “asm.js”) and runs extremely fast. The LDEF virtual machine is a sub-project in Amibian.js

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.

patron_asm2

Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part of the project is to create the world's first cloud-based development platform. A full Visual Studio clone if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it's a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic; it was used to create some great games back in the 80s and 90s. It's well suited for games and multimedia programming and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a sub-set of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic, and “no native code allowed”. This is why the entire back-end and service layer is targeting node.js. This ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason is: Chrome OS might be free, but it's only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It's quite frankly boring and too boxed in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud, on cheap SBC’s. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms yes, but so is Unix. Old doesn’t automatically mean bad, it actually means that it’s adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.

image2

The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS. Quite simply because the way Amiga OS is organized and the way you approach computing on the Amiga is brilliant. Amiga OS is much more intuitive and easier to understand than Linux and Windows. It's a system that you could learn how to use fully with just a couple of days exploring; and no manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as much of the original ROM Kernel functions as possible. Naturally I can’t port all of it, because it’s not really relevant for Amibian.js. Things like device-drivers serve little purpose for Amibian.js, because Amibian.js talks to node.js, and node talks to the actual system, which in turn handles hardware devices. But the way you would create windows, visual controls, bind events and create a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have setup a dedicated machine with Amibian.js then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons or services as they are called), the core server is initialized, and then a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence – the one that deals with your account, your preferences and adaptations – runs when you login to the system.

When you login to your Amibian.js account, no matter if it’s just locally on a single PC, a distributed cluster, or via the browser into your cloud account — several things happen:

  1. The client (web-page if you like) connects to the server using WebSocket
  2. Login is validated by the server
  3. The client starts loading preferences files via the mapped filesystem, and then applies these to the desktop.
  4. A startup-sequence script file is loaded from your account, and then executed. The shell-script runtime engine is built into the client, as is REXX execution.
  5. The startup-script will setup configurations, create symbolic links (assigns), mount external devices (dropbox, google drive, ftp locations and so on)
  6. When finished the programs in the ~/WbStartup folder are started. These can be both visual and non-visual.

As you can see Amibian.js is not a mockup or “fake” desktop. It implements all the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced, where file-data is loaded via special drivers; drivers that act as a bridge between a storage service (a harddisk, a network share, a FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.
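To illustrate the driver idea (and nothing more), here is a purely hypothetical sketch – the unit name, interface name and methods are mine, not the actual Amibian.js driver specification – showing the general shape such a bridge could take in object pascal terms:

unit storagedriver_sketch;

// Hypothetical illustration only – NOT the actual Amibian.js driver API.
// The idea: a driver maps generic file operations onto one concrete
// storage backend (local disk, FTP, Dropbox, a NAS share, and so on).

interface

uses
  Classes;

type
  IStorageDriver = interface
    function  DriverName: string;
    function  FileExists(const Path: string): boolean;
    function  GetFile(const Path: string): TStream;
    procedure PutFile(const Path: string; const Data: TStream);
    procedure DeleteFile(const Path: string);
  end;

implementation

end.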

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (scripted Amiga Emulator) that has been heavily optimized for web. Not only is it written in JavaScript, it performs brilliantly and thus allows us to boot into a real Amiga system. So if you have some floppy-images with a game you love, that will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are ported to JavaScript; you have Nes, SNes, N64, PSX I & II, Sega Megadrive and even a NEO GEO port. So playing your favorite console games right in the browser is pretty straight forward!

But the really interesting part is probably QEmu. This allows you to run x86 instances directly in the browser too. You can boot up in Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point) but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side, and only the graphics and sound is streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? Those that act almost like libraries on a physical computer? Well, these services don't have to be on the same machine — you can place them on separate machines and spread the load, so the system as a whole works faster.

47470965_10155861938320906_4959664457727868928_n

Above: The official Amibian.js cluster, 4 x ODroid XU4s SBC’s in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work on. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called "headless" computers, meaning that they don't have an HDMI port – they are designed to be logged into and set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI out port, which serves as "the master".

The architecture is quite simple: We allocate one whole SBC for a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture the machine that deals with the desktop clients doesn't have to do all the grunt work. It will accept tasks from the user and hosted applications, and then delegate the tasks between the 4 other machines.

Note: The number of SBC’s is not fixed. Depending on your use you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40 core monster shipping at around $250.

But like mentioned, you don’t have to buy this to use Amibian.js. You can install it on a single spare X86 PC you have, or daisy chain a couple of older PC’s on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBC’s in the initial design all have GPU (graphical processing unit) as well as audio capabilities. What they lack is GPIO pins and 3 additional USB ports. So each of the nodes on our cluster can handle graphics at blistering speed — but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40 core cluster I use has more computing power than northern Europe had in the early 80s; that's something to think about. And the price tag is under $300 (!). I don't know about you, but I always wanted a proper mainframe, a distributed computing platform that you can login to and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant to emulate an Amiga directly in the browser – a more interesting design is to decouple the emulation from the output. In other words, run the emulation at full speed server-side, and just stream the display and sounds back to the Amibian.js display. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi threading) and fully utilize the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but this is something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications Amibian.js will be awesome. But Rome was not built in a day, so it's wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, then people will start using it en masse and produce applications for it. It's always a bit of work to reach that point and create critical mass.

tomes

Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and a normal internet connection are available. In my case I can install a native compiler on one of the nodes in the cluster, and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.

 

I also love that I can set up a dedicated platform that runs legacy applications, games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super expensive hardware to order, no absurd hosting fees, and finally a system that we all can shape and use in a plethora of systems. From a fully fledged desktop to a super advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga owners to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well — I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, the more relevant Amibian.js will become. It is perfect as a foundation for large-scale applications, embedded systems — and indeed, as a solo platform running on embedded devices!

I can't wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

Delphi Developer Demo Competition votes

November 3, 2018 Leave a comment

A month ago we setup a demo competition on Delphi Developer. It’s been a few years since we did this, and demo competitions are always fun no matter what, so it was high time we set this up!

all_prices

This year's prizes are awesome!

Initially we had a limit of at least 10 contestants for the competition to go through, but I will make an exception this time. The prizes are great and worth a good effort. I was a bit surprised by the low number of contestants since more than 60 developers signed our poll about the event. So I was hoping for at least 20 to be honest.

I think the timing was a bit off, we are closer to the end of the year and most developers are working under deadlines. So next year I think I’ll move the date to June or July.

Be that as it may – a demo competition is a tradition by now, so we proceed to the voting process!

The contestants

The contestants this year are:

  • Christian Hackbart
  • Mogens Lundholm
  • Steven Chesser
  • Jens Borrisholt
  • Paul Nicholls

Note: Dennis is a moderator on Delphi Developer, as such he cannot partake in the voting process.

The code

Each contestant has submitted a project to the following repositories (in the same order as the names above), so make sure you check out each one and inspect them carefully before casting your vote.

Voting

We use the poll function built into Facebook, so just visit us at Delphi Developer to cast your vote! You can only vote once and there is a 1 week deadline on this (so voting closes on the 10th of this month).

Delphi Developer Competition

September 28, 2018 Leave a comment

The Delphi Developer group on Facebook has been around for a few years, and in that time we have held two very interesting demo competitions. The last competition we held was for Smart Pascal (Smart Mobile Studio) only, but we are extending it to include the dialects supported by our group; meaning Delphi, Smart Pascal, Freepascal and Remobjects Oxygene!

Embarcadero shipped over some extra goodies for us, so the competition this year is indeed a magical one. The top 3 contestants all get the official Embarcadero T-Shirt. We also throw in 10 Sencha ball-pens for each of the top 3 contestants; this is in addition to the actual prizes listed below (!)

The #1 winner not only gets the 100€ FPGA devkit (see prizes below), he or she also walks off with a high-quality, stainless steel Embarcadero branded coffee mug that holds half a litre of breakfast! (I seriously wanted to keep this for myself).

all_prices

The prizes in all their glory!

Submission rules are:

  • Source submission (GPL, LGPL) + binary
  • No dependencies on commercial libraries or components
  • Submissions must be available through GIT or BitBucket
  • Submission must include everything it needs to be compiled

Submission categories are:

  • Graphical demo (demo-scene style)
  • Games and multimedia
  • General purpose (utility programs)

Use the following Google form to register:

The purpose of the submissions is to show off both the language and your skills. Back in 2013 we got a ton of really cool demo-scene stuff, demonstrating timeless techniques; everything from bouncing meta-balls, gouraud shaded vectors, sinus scroll-texts and webgl landscape flight. We also had a fantastic fractal explorer program, a bitmap rotozoom generator – and two great games, which both made it onto the App Store and Google Play!

First prize

first_price.png

The winner walks off with some exciting stuff!

The first prize this year is something really, really special. The winner walks off with a spiffing Altera Cyclone IV FPGA starter board. This is a spectacular FPGA kit that allows you to upload a wide range of ready-to-rock FPGA cores, as well as your own logic designs.

But to make it more accessible we added a retro daughter board, which gives you VGA, audio, keyboard, mouse, MicroSD, serial and two old school joystick ports. The daughterboard is needed if you plan on using some of the retro-cores out there. I personally love the Amiga core (shock, I know) but you can run anything from a humble Spectrum to Sega Megadrive, SNES, Atari ST/E, Neo-Geo and many others.

While the daughter-board makes this wonderful for retro-computing and gaming, FPGA is first and foremost a tool for engineering. It ships with a USB-Blaster which allows you to connect it directly to your PC, where it will be recognized as a device. FPGA modeling applications will pick this up and you can test out designs "live", or just place a core on the SD-card and edit the boot config.

The kit sells for roughly 100€ with a case, but getting both the motherboard and the retro daughter-board is difficult. These things are sold separately, and the daughter board is produced in small numbers by dedicated hackers. So winning a kit that is pre-assembled, soldered and ready to go is quite a prize!

If you are even remotely interested in FPGA programming, this should give you goosebumps!

Second prize

tinker

The most powerful SBC I have ever used

The silver medal is the powerful Asus Tinkerboard, probably the most powerful SBC you can get below 100€. It delivers 10 times the firepower a Raspberry PI 3b can muster – and is superbly suited for Android development, Smart Mobile Studio kiosk systems and much, much more.

Of all the boards I have tested and own, this is the one with enough CPU grunt (even the mighty ODroid XU4 can't touch this) to rival a low-end x86 laptop. You have to fork out for a SnapDragon IV to beat the Tinkerboard.

I have two of these around the house myself, one as a game console running Emulation Station (emulates PSX 1, 2 and 3 games), and another under my TV with Kodi and a 2 terabyte movie collection.

Third prize

Last but not least, the bronze medal is a Raspberry PI 3b. The PI should be no stranger to programmers today; it more or less defines the IOT revolution and has, by far, the biggest collection of software of all SBCs (single board computers) available today.

Raspberry_Pi_3_Large

The device that represents the IOT phenomenon

The PI is a wonderful starter board for Delphi developers who want to play with hardware under Android. It's also a fantastic board for Smart and FPC development.

I use a PI to test node.js services written in Smart Mobile Studio.

Dates

We start the clock on the 1st of October and submissions must be delivered by the 31st. So you have a full month to code something cool!

Remember comments

While not always possible, try to write clean code. Part of the point here is to use these demos as an educational source.

We won't reject non-commented code, but please try to avoid 20k lines of spaghetti.

Hints and tips

Delphi has brilliant support for DirectX and OpenGL, so taking advantage of hardware acceleration should not be a problem. FMX is largely powered by the GPU and has 3d rendering and modeling as an integral feature – so Delphi developers have a slight advantage there.

16_bit_smb2_smm_wip_by_trackmasterfan341-da3nch3

Tilesets are graphics-blocks that can be used to create large game levels with a map-editor

If you want to use DIB’s under vanilla WinAPI there is always Graphics32, a wonderful and exceptionally detailed library for fast graphics.
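
If you have never done software rendering before, here is a minimal, hand-rolled sketch of a classic plasma effect using a plain VCL TBitmap and ScanLine (Graphics32's TBitmap32 gives you the same kind of direct pixel access, just faster and with far more drawing primitives). The procedure name is just for illustration:

uses
  System.Math, Vcl.Graphics;

// Renders one frame of a plasma effect into a 32-bit DIB
procedure RenderPlasma(Target: TBitmap; Tick: double);
var
  x, y, v: integer;
  Row: PCardinal;
begin
  Target.PixelFormat := pf32bit;
  Target.SetSize(320, 200);
  for y := 0 to Target.Height - 1 do
  begin
    Row := Target.ScanLine[y];
    for x := 0 to Target.Width - 1 do
    begin
      // A sum of sines gives the familiar "plasma" look
      v := EnsureRange(Round(127 + 128 * Sin(x / 16 + Tick) * Sin(y / 24 - Tick)), 0, 255);
      Row^ := Cardinal(v) or (Cardinal(v) shl 8) or (Cardinal(255 - v) shl 16);
      Inc(Row);
    end;
  end;
end;

Call it from a timer or a render loop with an increasing Tick value, and blit the result to your form with Canvas.Draw(0, 0, Target).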

Music: Most demo-scene code uses mod music (although today people play MP3's as well), and there are good wrappers for player libraries like Bass. It's always a nice touch to add a spot of music (and there are literally millions of free mod tracks available). So give your demo some flair by adding a kick-ass mod track, or impress us by writing a score yourself!

In the world of demo coding anything goes! Bring out that teenage spirit and go wild, create wonderful graphical effects, vector objects, scrolling texts, games or whatever tickles your fancy. If you need inspiration, check out the demo scene videos on YouTube (if that is what you would like to submit of course). A kick-ass database application, X server renderer, paint program or a compiler — it’s all good!

Make people go WOW that is cool!

Tile graphics, which are often used in games and demos, can be found almost anywhere. If you google "tileset" or "game tiles" you should get more than you need. Brilliant for parallax scrolling. Why not give Super Mario a run for its money? Show the next generation how to code a platform game! Check out the Tiled map-editor, which has a JSON export filter for you Smart Pascal coders.

screenshot-objects

Tiled is a powerful map editor. There is also Mappy, which I believe has a Delphi player

OK guys, the game is afoot! May the best coder win!

Using Smart Mobile Studio under Linux

April 22, 2018 Leave a comment

Every now and then when I post something about Smart Mobile Studio, an individual or two wants to inform me how they cannot use Smart because it’s not available for Linux. While more rare, the same experience happens now and then with OS X.

linux

The Smart desktop demo running in Firefox on Ubuntu, with Quake 3 at 60 fps

While the request for Linux or OS X support is both valid and understandable (and something we take seriously), more often than not these questions are a precursor to a larger discussion; one that touches on open source and personal philosophical points of view – usually with remarks on pricing.

Truth be told, the price we ask for Smart Mobile Studio is close to symbolic. Especially if you take the time to look at the body of work Smart delivers. We are talking hundreds of hand-written units with thousands of classes, each specifically adapted for HTML5, Phonegap and Node.js. Not to mention ports of popular JavaScript frameworks.

If you compare this to native object pascal development tools with similar functionality, they can set you back thousands of dollars. With an asking price of $149 for the Pro edition, $399 for the Enterprise edition, and a symbolic $42 for the Basic edition, Smart is an affordable solution. Also keep in mind that this gives you access to updates for a duration of 12 months. When was the last time you bought a full development suite that allows you to write mobile applications, platform-independent servers, platform-independent system services and HTML5 web applications for less than $400?

prices

Our price model is more than reasonable considering what you get

By platform independent we mean that in the true sense of the word: once compiled, no changes are required. You can write a system service on Windows and it will run just fine under Linux or OS X. No re-compile needed. You can take a server and copy it to Amazon or Azure, run it in a cluster or scale it from a single instance to 100 instances without any change. Until now, that has been unheard of for object pascal.

Smart Mobile Studio is the only object pascal development system that delivers a stand-alone IDE, a stand-alone compiler, and a vast object-oriented run-time library written from scratch to cover HTML5, Node.js and embedded systems that run JavaScript.

And yes, we know there are other systems in circulation, but none of them are even close to the functionality that we deliver. Functionality that has been polished for seven years now. And our RTL is growing every day to capture and expose more and more advanced functionality that you can use to enrich your products.

21272810_1585822718147745_4358597225084661216_o

The RTL class browser shows the depth of our RTL

As of writing we have a team of six people working on Smart Mobile Studio. We have things in our labs that are going to change the way people build applications forever. We were the first to venture into this new landscape. There was nobody we could imitate, draw inspiration from or learn from. We literally had to make the path as we moved forward.

And our vision and goal remains the same today as it was seven years ago: to empower object pascal developers – and to secure their investment in the language and methodology that is object pascal.

Discipline and purpose

There is so much I would like to work on right now. Things I want to add to Smart Mobile Studio because I find them interesting, powerful and I know people are going to love them. But that style of development, the “Garage days” as people call it, is over. It does wonders in the beginning of a project maybe, but eventually you reach a stage where a formal timeline and business plan must be carved in stone.

And once defined, you have to stick to it. It would be an insult to our customers if we pivoted left and right on a whim. Like every company we have room for research, even a couple of skunkwork projects, but our primary focus is to make our foundation rock solid before further growth.

j5

By tapping into established JavaScript frameworks you can cover more than 40 embedded systems and micro-controllers. More and more hardware supports JS for automation

Our “garage days” ended around three years ago, and through hard work we defined our timeline, business model and investor program. In 2017 we secured enough capital to continue full-time development.

Our timeline has been published earlier, but we can re-visit some core points here:

The visual components that shipped with Smart Mobile Studio in the beginning were meant more as examples than actual ready-to-use modules. This was common for other development platforms of the day, such as Xamarin's C# / Mono toolchain, where you were expected to inherit from and implement aspects of a "partial class". This is also why Smart Pascal has support for partial classes (neither Delphi nor Freepascal supports this wonderful feature).

native

One of our skunkwork projects is a custom linux distro that runs your Smart applications directly in the Linux framebuffer. No X or desktop, just your code. Here running “the smart desktop” as the only visual front-end under x86 vmware

Since developers coming from Delphi had different expectations, there was only one thing to do: completely re-write every single visual control (and add a few new controls) so that they matched our customers' expectations. So the first stretch of our timeline has been 100% dedicated to the visual aspects of our RTL. While doing so we have made the RTL faster, more efficient, and added some powerful sub-systems:

  • A dedicated theme engine
  • Unified event delegation
  • Storage device classes
  • Focus and control tracking
  • Support for relative position modes
  • Support for all boxing models
  • Inline linking ( {$R “file.js”} will now statically link an external library)
  • And much, much more

So the past eight months have been all about visual components.

22814515_1630289797034370_9138255627706616601_n

Theming is important

The second stretch, which we are in right now, is dedicated to the non-visual infrastructure. This means in particular Node.js but also touches on non-visual components, TAction support and things that will appear in updates this year.

Node.js is especially important since it allows you to write platform and chipset independent web servers, system services and command-line applications. This is pioneering work and we are the first to take this road. We have successfully tamed the alien landscape of JavaScript, both for client, mobile and server – and terraformed it into a familiar, safe and productive environment for object pascal developers.

I feel the results speak for themselves, and our next update brings Smart Mobile Studio to the next level: full stack cloud and web development. We now cover back-end, middle-ware and front-end technologies. And our RTL now stretches from micro-controllers to mobile applications to clustered cloud services.

This is the same technology Netflix uses to move staggering amounts of data on a global scale, which should tell you something about the potential involved.

Working on Linux

Since Smart Mobile Studio was designed to be a swiss army knife for Delphi and Lazarus developers, capable of reaching segments of the market where native code is unsuitable – our primary focus is Microsoft Windows. At least for now.

Delphi itself is a Windows-based development system, and even though it supports multiple targets, Windows is still the bread and butter of commercial software development.

Like I mentioned above, we have a timeline to follow, and until we have reached the end of that line, we are not prepared to refactor our IDE for Linux or OS X. This might change sooner than people think, but until our timeline for the RTL is concluded, we will not allocate time for making the IDE platform independent.

But, you can actually run Smart Mobile Studio on both Linux and OS X today.

Linux has a system called Wine. This is a compatibility layer that implements the Windows API, but delegates all the calls to Linux. So when a Windows program calls a WinAPI function through Wine, the call is delegated to the Linux equivalent. This is a massive undertaking, but it has years of work behind it and functions extremely well.

So on Linux you can install it by opening up a shell and typing:

sudo apt-get install wine

I take for granted here that your Linux flavour has apt installed (I'm using Ubuntu since that is easy to work with), which is the package manager that gives you the "apt-get" command. If you have some other system then just google how to install a package.

With Wine and its dependencies installed, you can run the Smart Mobile Studio installer. Wine will create a virtual, sandboxed disk for you – so that all the files end up where they should. Once finished you punch in the license serial number as normal, and voila – you can now use Smart Mobile Studio on Linux.

Note: in some cases you have to right-click the SmartMS.exe and select “run with -> Wine”, but usually you can just double-click the exe file and it runs.

Smart Mobile Studio on OSX

Wine has also been ported to OS X, and there the process is even more user friendly. You download a program called WineBottler, which takes Smart and bundles it with Wine plus any dependencies it needs. You can then start Smart Mobile Studio as if it were a normal OS X application.

Themes and look

The only problem with Wine is that it doesn't support Windows themes out of the box. It would be illegal for them to ship those files. But you can manually copy over the Windows theme files and install them via the Wine config application. Once installed, Smart will look as it should.

By default Wine uses the old Windows 95 look & feel. Personally I don't care too much about this – it's being able to code, compile and run the applications that matters to me – but if you want a more modern look then just copy over the Windows theme files and you are all set.


TextCraft 1.2 for Smart Pascal

January 26, 2018 Leave a comment

TextCraft is a fast, generic object-pascal text parsing framework. It provides you with the classes you need to write fast text parsers that build effective data models.

The Textcraft framework was recently moved up to version 1.2 and has been ported from Delphi to both Freepascal and Smart Pascal (the dialect used by Smart Mobile Studio). This is probably the only parsing framework that spans 3 compilers.

Smart Pascal coders can download the framework unit here. It can be placed in the $Install/Library folder (where $Install is where Smart's library and RTL folders are installed): BitBucket TextCraft Repository

Buffer, parser, model

Textcraft divides the job of parsing into 4 separate objects, each of them representing a concept familiar to people writing compilers: buffer, parser, model and context. If you are parsing a programming language, the "model" would be what people call the AST (short for "Abstract Syntax Tree"). This AST is later fed to the code generator, turning it into an executable program (Smart Pascal compiles to JavaScript, so there really is no limit to the transformation, just the level of complexity).

Note: Textcraft is not a compiler for any particular language; it is a generic, language-agnostic text parsing framework that makes it easy for you to write parsers. We recently used it to parse command-line parameters for Freepascal, so it doesn't have to be about languages at all.

The buffer

The buffer has one of the most demanding jobs in the framework. In other frameworks the buffer is often just a memory allocation with a simple read method, but in TextCraft the buffer is responsible for a lot more. It has to expose functions that make text recognition simple and effective; it has to keep track of column and row position as you move through the buffer content – and much, much more. So in TextCraft the buffer is where the text methodology is implemented in full.

The parser

As mentioned, the parser is responsible for using the buffer's methods to recognize and make sense of a text. As it makes its way through the buffer content, it creates model objects that represent each element. Typical for a language would be structures (records), classes, enums, properties and so on. Each of these will be registered in the AST data model.

The Model

The model is a construct. It is made up of as many model-object instances as you need to express the text in symbolic form. It doesn't matter if you are parsing a text document or source code; you still have to define a model for it.

The model obviously reflects your needs. If you just need a superficial overview of the data then you create a simple model. If you need more elaborate information then you model that.

Note: When parsing a text document, a traditional organization would be to divide the model into: chapter, section, paragraph, line and individual words.
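To make that concrete, a bare-bones model for a text document could look something like this in object pascal. The class names are purely illustrative; they are not TextCraft types:

type
  TWordNode = class
    Text: string;
  end;

  TLineNode = class
    Words: array of TWordNode;       // the individual words on one line
  end;

  TParagraphNode = class
    Lines: array of TLineNode;
  end;

  TSectionNode = class
    Title: string;
    Paragraphs: array of TParagraphNode;
  end;

  TChapterNode = class
    Title: string;
    Sections: array of TSectionNode;
  end;

  TDocumentModel = class
    Chapters: array of TChapterNode; // the root of the whole model
  end;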

The Context

The context object is what links the parser to our model and buffer objects. By default the parser doesn’t know anything about the buffer or model. This helps us abstract away things that would otherwise turn our code into a haystack of references.

The way the context is used can be described like this:

When parsing complex data you often divide the job into multiple classes. Each class deals with one particular topic. For example: if parsing Delphi source code, you would write a class that parses records, a parser that handles classes, another that handles field declarations (and so on).

As a parser recognizes one of these constructs, say a record, it will create a record model object to hold the information. It will then add that to the context by pushing it onto its reference stack.

The first thing a child parser does is grab the model object from the top of the reference stack. This way the child parsers always know where to store their model information. It doesn't matter how deep or recursive something gets; the stack approach, and passing the context object on to the child parsers, will always make sure each parser "knows" where to store its information.
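A minimal sketch of that push/peek pattern could look like the code below. The names (TParserContext, TModelObject) are mine for illustration only, not the actual TextCraft classes:

uses
  Classes;

type
  TModelObject = class
    // base class for everything that ends up in the AST
  end;

  TParserContext = class
  private
    FStack: TList;                   // reference stack of model objects
  public
    constructor Create;
    destructor Destroy; override;
    procedure Push(AModel: TModelObject);
    function Peek: TModelObject;     // child parsers grab their parent model here
    function Pop: TModelObject;
  end;

constructor TParserContext.Create;
begin
  inherited Create;
  FStack := TList.Create;
end;

destructor TParserContext.Destroy;
begin
  FStack.Free;
  inherited;
end;

procedure TParserContext.Push(AModel: TModelObject);
begin
  FStack.Add(AModel);
end;

function TParserContext.Peek: TModelObject;
begin
  Result := TModelObject(FStack[FStack.Count - 1]);
end;

function TParserContext.Pop: TModelObject;
begin
  Result := Peek;
  FStack.Delete(FStack.Count - 1);
end;

A record parser would push its freshly created record model, hand the same context to the field parser, and pop the model again once the declaration is done.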

Why is this important?

This is important because it’s cost-effective in computing terms. The TextCraft framework allows you to create parsers that can chew through complex data without turning your project into spaghetti.

So no matter if you are parsing phone numbers, zip codes or complex C++ source code, TextCraft will help you get the job done, in a way that is easy to understand and maintain.

Amiga OS 4, object pascal and everything

August 2, 2017 2 comments

Those that read my blog know that I'm a huge fan of the Commodore Amiga machines. This was a line of computers that took the world by storm around 1985 and held its ground until 1993. Sadly the company had to file for bankruptcy after a series of absurd financial escapades by its management.

carlsassenrath_teamamiga

The original team before it fell prey to mismanagement

The death of Commodore is one of the great tragedies in computing history. There is no doubt that Commodore represented a much-needed alternative to Microsoft and Apple – and the death of Commodore meant innovation of technology took a turn for the worse.

Large books have been written on this subject, as well as great documentaries and movies – so I’m not going to dig further into the drama here. Ars Technica has a range of articles covering the whole story, so if you want to understand how the market got the way it is today, head over and read up on the story.

On a personal level I find the classic Amiga machines a source of great inspiration even now. Despite Commodore dying in the 90's, more than two decades later I still stumble over amazing source code for this awesome computer. There are a few things in Amiga OS that "hint" at its true age, but ultimately the system has aged with amazing elegance and grace. It just blows people away when they realize that the Amiga desktop hit the market in 1985 – and much of what we regard as a modern desktop experience is actually inherited from the Amiga.

amigasys4_04

Amiga OS is highly customizable. Here showing OS 3.9 [the last of the classic OS versions]

As I type this the Amiga is going through a form of revival. It’s actually remarkable to be a part of this because the scope of such an endeavour is monumental. But even more impressive is just how many people are involved. It’s not like some tiny “computer cult” where a bunch of misfits hang out in sad corners of the internet. Nope, we are talking about thousands of educated and technical people who still use their Amiga computers on a daily basis.

For instance: the realization of the new Amiga models has cost £1.2 million, so there are serious players involved in this.

The user-base is varied of course, it’s not all developers and engineers. You have gamers who love to kick back with some high quality retro-gaming. You have graphics designers who pixel large masterpieces (an almost lost art in this day and age). And you have musicians who write awesome tracks; then use that to spice up otherwise flat and dull PC based tracks.

What is even more awesome is the coding. Even the latest Freepascal has been ported, so if you were expecting people hand punching hex-codes you will be disappointed. While the Amiga is old in technical terms, it was so far ahead of the competition that people are surprised just how capable the classic systems are.

And yes, people code games, demos and utility programs for the classical Amiga systems even today. I just installed a Dropbox cloud driver on my system and it works brilliantly.

The brand new Amiga

Classic Amiga machines are awesome, but this post is not about the old models; it's about the new models that are coming out now. Yes, you read right: next generation Amiga computers have finally become a reality. Having waited for 22 years I am thrilled to say that I just ordered a brand new Amiga 5000! (and can't wait to install Freepascal and start coding).

It’s also quite affordable. The x5000 model (which is the power system) retails at around €1650, which is roughly half the price I paid for my Intel i7, Nvidia GeForce GTX 970 workstation. And the potential as a developer is enormous.

Just think about the onslaught of Delphi code I can port over, and how instrumental my software can become by getting in early. Say what you will about Freepascal but it tends to be the second compiler to hit a platform after GCC. And with Freepascal in place a Delphi developer can do some serious magic!

Right. So the first Amiga is the power model, the Amiga 5000. This can be ordered today. It costs the same as a good PC (in the 1600€ range, depending on import tax and VAT). This is far less than I paid for my crap iMac (that I never use anymore).

The power model is best suited for people who do professional work on the machine. Software development doesn’t necessarily need all the firepower the x5000 brings, but more demanding tasks like 3d rendering or media composition will.

The next model is the A1222, which is due out around Christmas 2017 or the first quarter of 2018.

IMG_9251

The A1222 “Tabour”

You would perhaps expect a mid-range model, something retailing at around €800 or thereabouts – but the A1222 is without a doubt a low-end model.

It should retail for roughly €450. I think this is a great idea because AEON (who make the hardware) have different needs from Hyperion (who make the new Amiga OS – more about that further into the article). AEON needs to get enough units out to secure the foundation – while Hyperion needs vertical market penetration (read: become popular and also hit other hardware platforms as well). These factors are mutually exclusive, just like they are for Windows and OS X. Which is probably why Apple refuses to sell OS X without a Mac, or they could end up competing with themselves.

A brave new Amiga OS

But there is more to this “revival” than just hardware. Many would even say that hardware is the least interesting about the next generation systems, and that the true value at this point in time is the new and sexy operating system. Because what the world needs now more than hardware (in my opinion) is a lightweight alternative to Linux and Windows. A lean, powerful, easy to use, highly customizable operating system that will happily boot on a $35 Raspberry PI 3b, or a $2500 Intel i7 monster. Something that makes computing fun, affordable and most of all: portable!

AmigaOS 41 Final (Commodore-Amiga, 2015, Amiga)_2

My setup of Amiga OS 4, with FPC and Storm C/C++

And by lean I have to stress that the original Amiga operating system, the classic 3.x system that was developed all the way to the end, was initially created to thrive in as little as 512 KB. At most I had 2 megabytes of ram in my Amiga 1200, and that was ample space to write and run large programs, play the latest games and enjoy the rich, colorful and user-friendly desktop environment. We have to remember that the Amiga had a pre-emptive multi-tasking, window-based OS a decade before Microsoft.

Naturally the next-generation systems are built to deal with the realities of 2017 and beyond, but incredibly enough the OS will run just fine with as little as 256 megabytes of ram. Not even Windows Embedded can boot on that. Linux comes close with distributions like Puppy and DSL, but Amiga OS 4 gives you a lot more functionality out of the box.

What way to go?

OK, so we have new hardware, but what about the software? Are the new Amigas supposed to run some ancient version of Amiga OS? Of course not! The people behind the new hardware have teamed up with a second company, Hyperion, that has, believe it or not, done a full re-implementation of Amiga OS! And naturally they have taken the opportunity to get rid of annoying behavior – and to add behavior people expect in 2017 (like double-clicking on a window header to maximize it, easy access to menus and much more). Visually, Amiga OS 4 is absolutely gorgeous. Just stunning to look at.

Now there are many different theories and ideas about where a new Amiga should go. Sadly it's not just as simple as "hey, let's make a new Amiga"; the old system is mired in patent and licensing issues. It is close to an investor's worst nightmare since ownership is so fragmented. Back when Commodore died, different parts of the Amiga were sold to different companies and individuals. The main reason we haven't seen a new Amiga until now is that the owners have been fighting among themselves. The Amiga as we know it has been caught in limbo for close to two decades.

My stance on the whole subject is that Trevor Dickenson, the man behind the next generation Amiga systems, has done the only reasonable thing a sane human being can when faced with a proverbial patent kebab: the old hardware is magical for us who grew up on it – but by today's standards it is an obsolete dinosaur. The same can be said about Amiga OS 3.9. So Trevor has gone for a full re-implementation of both the OS and the hardware.

The other predominant idea is more GNU/Linux in spirit, where people want Amiga OS to be platform independent (or at least written in a way that makes the code run on different hardware as long as some fundamental infrastructure exists). This actually resulted in a whole new OS being written, namely Aros, which is a community made Amiga OS clone. A project that has been perpetually maintained for 20 years now.

131-3

Aros, a community re-implementation of Amiga OS for x86

While I think the guys behind Aros should be applauded, I do feel that AEON and Hyperion have produced something better. There are still kinks to work out on both systems – and don’t get me wrong: I am thrilled that Aros is available, I just enjoy OS 4 more than I do Aros. Which is my subjective opinion of course.

New markets

Right. With all this in mind, let us completely disregard the old Amiga and the Commodore drama, and instead focus on the new operating system as a product. It doesn't take long before a few thrilling opportunities present themselves.

The first that comes to my mind is how well suited OS 4 would be as an embedded platform. The problem with Linux is ultimately the same one that haunts OS X and Windows, namely that size and complexity grow over time. I have seen Linux systems as small as 20 megabytes, but for running X-based full-screen applications that take advantage of hardware-accelerated graphics, you really need a bigger infrastructure. And the moment you start adding those packages, Linux puts on weight and dependencies fast!

embedded-systems-laboratory-systems

The embedded market is one place where Amiga OS would do wonders

With embedded systems I'm not just talking about headless servers or single-application devices. Take something simple like a ticket booth, an information kiosk or a POS terminal. Most of these run either Windows Embedded or some variation of Linux. Since both of these systems require a fair bit of infrastructure to function properly, the price of the hardware typically starts at around 300€. Delphi and C++ based solutions, at least those that I have seen, end up using boards in the 300€ to 400€ range.

This price-tag is high considering the tasks you need to do in a POS terminal or ticket system. You usually have a touch enabled screen, a network connection, a local database that will cache information should the network be down – the rest is visual code for dealing with menus, options, identification and fault tolerance. If a visa terminal is included then a USB driver must also be factored in.

These tasks are not heavy in themselves. So in theory a smaller system, if properly adapted, could do the same job if not a better one – at a much better price.

More for less, the Amiga legacy

Amiga OS would be able to deliver the exact same experience as Windows and Linux – but running on more cost-effective hardware. Where modern Windows and Linux typically need at least 2 gigabytes of ram for a heavy-duty visual application, a full network stack and database services – Amiga OS is happy to run in as little as 512 megabytes. Everything is relative of course, but running a heavy visual application with less than a gigabyte of memory in 2017 is rare to say the least.

Already we have cut costs. High-end ARM boards ship with 4 gigabytes of ram and a snappy CPU – while medium boards ship with 1 or 2 gigabytes of ram and a less powerful CPU. The price difference is already a good 75€ on ram alone. And if we take the CPU a step down as well, we can push it down by a good 120€. At least if you are ordering in bulk (say 100 units).

The exciting part is ultimately how well Amiga OS 4 scales. I have yet to try this since I don't have access to the machine I have ordered yet – and sadly Amiga OS 4.1 is compiled purely for PPC. This might sound odd since everyone is moving to ARM, but there are still plenty of embedded systems based on PPC. But yes, I would urge our good friend Trevor Dickenson to establish a migration plan to ARM, because it would kill two birds with one stone: upgrading the faithful Amiga community while entering the embedded market at the same time. Since the same hardware is involved, these two factors would stimulate the growth and adoption of the OS.

amihw

The PPC platform gives you a lot of bang-for-the-buck in the A1222 model

But for the sake of argument, let's say that Amiga OS 4 scales exceptionally well, meaning it will happily run on an ARMv8 board with 1 gigabyte of ram. That would mean it could run on systems like the Asus Tinker Board, which retails at 60€ incl. VAT. This would naturally not be a high-performance system like the A5000, but embedded is not about that – it's about finding something that can run your application safely, efficiently and without problems.

So if the OS scales gracefully to ARM, we have brought the hardware cost down from 300€ to 60€ (I would round that up to 100€, something always comes up). If the customer's software was Windows-based, a further 50€ can be subtracted from the software budget for bulk licensing. Again, buying in bulk is the key.

Think different means different

Already I can hear my friends that are into Linux yell that this is rubbish and that Linux can be scaled down from 8 gigabytes to 20 megabytes if needed. And yes, that is true. But what my learned friends forget is that Linux is a PITA to work with if you haven't spent a considerable amount of time learning it. It's not a system you can just jump into and expect to have results the next day. Amiga OS has a much friendlier architecture, and things that are often hard to do on Windows and Linux are usually very simple to achieve on the Amiga.

Another fact my friends tend to forget is that the great majority of commercial embedded projects are done using commercial software. Microsoft actually presented a paper on this when they released their IoT support package for the Raspberry PI. And based on personal experience I have to agree with this. In the past 20 years I have only seen 2 companies that use Linux as their primary OS, both in products and in their offices. Everyone else uses Windows Embedded for their products and day-to-day management.

So what you get are developers using traditional Windows development tools like Visual Studio or Delphi (although that is changing rapidly with node.js). And they might be outstanding programmers, but Linux is still reserved for server administrators and the odd few that use it on a hobby basis. We simply don't have time to dig into esoteric "man pages" or explore the intricate secrets of the kernel.

The end result is that companies go with what they know. They get Windows embedded and use an expensive x86 board. So where they could have paid 100€ for a smaller SBC and used Amiga OS to deliver the exact same product — they are stuck with a 350€ baseline.

Be the change

The point of this little post has been to demonstrate that yes, the embedded market is more than open for alternatives. Linux is excellent for those that have the time to learn its many odd peculiarities, but over the past 20 years it has grown into a resource-hungry beast. Which is ironic, because it used to be Windows that was the bloated scapegoat. And to be honest, Windows Embedded is a joy to work with and much easier to shape to your exact needs – but the prices are ridiculous and it won't perform well unless you throw at least 2 gigabytes at it (relative to the task of course, but in broad strokes that's the ticket).

But wouldn't it be nice to have a clean, resource-friendly and extremely fast alternative? One where auto-starting an application in exclusive mode is a one-liner in the startup-sequence file? A file which is actually called "startup-sequence", rather than some esoteric "init.d" alias that is neither a folder nor an archive but something reminiscent of the Windows registry? A system where the libraries and the whole folder structure that makes up drivers, shell, desktop and services are intuitively named for what they are?

asus

Amiga OS could piggyback on the wave of low-cost ARM SBC’s that are flooding the market

You could learn how to use Amiga OS in two days tops, yet it holds great depth, so you can grow with the system as your needs become more complex. The architecture is so well organized that even if you know nothing about settings, a folder named "prefs" doesn't leave much room for misinterpretation.

But the best thing about AmigaOS is by far how elegantly it has been architected. You know, when software is planned right it tends to refactor away things that would otherwise be an obstacle. It's like a well-oiled machine where each part makes perfect sense and you don't need a huge book to understand it.

From where I am standing, Amiga OS is ultimately the biggest asset Hyperion and AEON have to offer. I love the new hardware that is coming out – but there is no doubt in my mind, and I know I am right about this, that the market these companies should focus on now is not PPC, but rather ARM and embedded systems.

It would take an effort to port over the code from a PPC architecture to ARM, but having said that – PPC and ARM have much more in common than say, PPC and x86.

I also think the time is ripe for a solid, powerful ARM board for desktop computers. While smaller boards get most of the attention – the Raspberry PI, the ODroid XU4 and the (S)Tinkerboard – once you move the baseline beyond 300€ you see some serious muscle. Boards like the iMX6 OpenRex SBC Ultra pack a serious punch, and as expected it ships with 4 gigabytes of ram out of the box.

While it’s impossible to do a raw comparison between the A1222 and the iMX6 OpenRex, I would be surprised if the iMX6 delivered terrible performance compared to the A1222 chipset. I am also sure that if we beefed up the price to 700€, aimed at home computing rather than embedded – the ARM power boards involved would wipe the floor with PPC. There are a ton of factors at play here – a fast CPU doesn’t necessarily mean better graphics. A good GPU should make up at least 1/5 of the price.

Another cool factor regarding ARM is that the boot firmware gives you a great deal of features you can incorporate into your product. All the ARM boards I have give you FAT32 support out of the box, for instance; this is handled by the board's firmware, so you don't need to write filesystem drivers for it. Most boards also support the Ext2 and Ext3 filesystems, recognized automatically on boot. This rich firmware / mini-kernel is what makes ARM so attractive to code for, because it takes away a lot of the boring, low-level work that took months to get right in the past.

Final words

This has been a long article, from the early years of Commodore – all the way up to the present day and beyond. I hope some of my ideas make sense – and I also hope that those who are involved in the making of the new Amiga perhaps pick up an idea or two from this material.

Either way I will support the Amiga with everything I got – but we need a couple of smart ideas and concrete plans on behalf of management. And in my view, Trevor is doing exactly what is needed.

While we can debate the choice of PPC, it’s ultimately a story with a long, long background to it. But thankfully nothing is carved in stone and the future of the Amiga 5000 and 1222 looks bright! I am literally counting the days until I get one!

Freepascal online, awesome work!

June 20, 2017 Leave a comment

Freepascal is, with the exception of GCC (the GNU C/C++ compiler toolchain), probably the most versatile and adaptable compiler in the world. Unlike most compilers outside the C camp, Freepascal is a complete toolchain, and it has more than a few similarities to C/C++ in that it's largely self-reliant.

MUI_PascalDemo

Freepascal scales from the old mc68000 family to the latest Intel i7, impressive!

FPC is easy to port over to new (or old) platforms, and as far as languages go – object pascal has a level of productivity that I have yet to find in other languages.

Now I have no interest in language wars or fanboy stereotyping, all languages have aspects that are unique and beneficial. So this is not an attempt to promote A over B.

Targeting both old and new from a single codebase

While mainstream and work-related topics rarely touch on this, there is actually a healthy group of people still using older platforms. Naturally this is well within the "hobby" range – where passion and curiosity are the prime motivators rather than cash. But still, it is interesting to see how little modern software is capable of scaling back to processors 10, 15 or 20 years out of date.

Alb42 is the nickname of a very industrious individual who has taken it upon himself to port freepascal back to his roots. Which just happens to be the Amiga line of computers.

MiniSlugAmiga

Some of the best arcade games ever made are “vintage” by nature

But he has also done a lot of work for modern systems. A new Amiga was actually released this year (although most people haven't noticed it) – and finding development software for a brand new platform is not easy. You have a variant of C/C++ naturally – but the honest truth is that C/C++ is not the language most people use to create desktop productivity software in. That is left to object pascal, C# and variations of Java. Delphi alone is responsible for a huge portfolio of popular software on Windows, for instance – only surpassed recently by monoliths like C#.

So getting freepascal running on Amiga OS 4 (which is the latest and modern operating system shipping with the new Amiga x5000 model) is very important to the market share of the platform.

Do it online!

Alb42 has set up an online compiler testbed where you can type in your freepascal snippets and compile them to executable code on the spot. You can choose between MorphOS, classic 68k, Aros (x86) and Amiga OS 4 (PPC). It's a very simple setup, yet it's possible to do ad-hoc compilation and play around with the system 🙂

onlinecompile

While humble, Alb42 has set up an interesting online kitchen sink that demonstrates how flexible and powerful object pascal really is

To give it a go, head over to: https://home.alb42.de/fpamiga/

And as always, visit his blog for both VMware images with cross compilation and binaries for your favorite retro system.

And perhaps even more important: binaries for your new x5000 (!)

Don't forget to join us at Amiga Dispatch on Facebook.

Cheers!

Understanding Smart Pascal

January 15, 2017 9 comments

One of the problems you get when working pro-bono on a project, is a constant lack of time. You have a fixed amount of hours you can spare, and every day you have to make decisions about where to invest those hours. The result is that Smart Mobile Studio has a wealth of technical resources and depth, but lacks the documentation you expect such a product to have. This has been and continues to be a problem.

Documentation really is a chicken and egg thing. It doesn't start out that way, but once the product is launched, you get trapped in a boolean dynamic: "few people buy it because it lacks documentation; you can't afford to write documentation because few people buy it". Considering the size of our codebase, I don't blame people for being a bit overwhelmed.

Despite our shortcomings, Smart Mobile Studio is growing. It has slow but steady growth as opposed to explosive growth. But all products need periods of explosive growth to build up resources so that future evolution of the product can be financed. So this lack of solid documentation acts almost like a filter. Only those that are used to coding in Delphi or Lazarus at a certain level, writing their own classes and components, feel comfortable using it.

It has become a kind of elite toolkit, used only by the most advanced coders.


The benefit of true OOP for JSVM is unquestionable

Trying to explain

The other day I talked to a man who simply could not wrap his head around Smart Pascal at all. Compile for JavaScript? But.. how.. How do you get classes? He looked at me with a face of disbelief. I told him that we emit a VMT (virtual method table) in JavaScript itself. That way, you get real classes, real interfaces and real inheritance. But it was like talking to a wall.

In his defence, he understood conceptually what a VMT was, no doubt from having read about it in the context of Delphi; but how it really works, and the fact that the principle is fundamental to object orientation at large, was alien to him.

var TObject={
  $ClassName: "TObject",
  $Parent: null,
  ClassName: function (s) { return s.$ClassName },
  ClassType: function (s) { return s },
  ClassParent: function (s) { return s.$Parent },
  $Init: function () {},
  Create: function (s) { return s },
  Destroy: function (s) { for (var prop in s) if (s.hasOwnProperty(prop)) delete s[prop] },
  Destroy$: function(s) { return s.ClassType.Destroy(s) },
  Free: function (s) { if (s!==null) s.ClassType.Destroy(s) }
}

Above: in object orientation the methods are only compiled once, while the instance data exists per object. This is why methods in OOP languages are compiled with a hidden first parameter that is the instance. Inheritance never duplicates the code inherited from ancestors.
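For comparison, this is the exact same mechanism the native object pascal compilers use: every method receives the instance as a hidden parameter (Self), and virtual calls are resolved through the class VMT. A trivial, self-contained illustration:

program VmtDemo;

{$mode objfpc}

type
  TAnimal = class
    procedure Speak; virtual;        // gets a slot in TAnimal's VMT
  end;

  TDog = class(TAnimal)
    procedure Speak; override;       // replaces that slot in TDog's VMT
  end;

procedure TAnimal.Speak;
begin
  WriteLn('...');                    // Self is the hidden instance parameter
end;

procedure TDog.Speak;
begin
  WriteLn('Woof!');
end;

var
  a: TAnimal;
begin
  a := TDog.Create;                  // the methods exist once, shared by all instances
  a.Speak;                           // resolved through the VMT at runtime -> 'Woof!'
  a.Free;
end.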

In retrospect I have concluded that it had more to do with “saving face” than this guy not understanding. He had just spent months writing a project in JavaScript that he could complete in less than a day using Smart Pascal – so naturally, he would look the fool to admit that he just wasted a ton of company money. The easiest way to dismiss any ignorance on his part, was to push our tech into obscurity.

But what really baked my noodle was his lack of vision. He had failed completely in understanding what cloud is, where the market is going and what that will mean to both his skill-set, his job prospects and the future of software development.

It’s not his fault. If anything it’s my fault for not writing about it earlier. In my own defense I took it for granted that everyone understood this and knew what was coming. But that is unfair because the ability to get a good overview of the situation depends greatly on where you are.

JavaScript, the most important language in the world

It may be hard for many people to admit this, but it is nonetheless true. JavaScript has become the single most important language on the planet. Arguably half of all software being written right now, whether for the server or the browser, is written in JavaScript.

I knew this would happen as early as 2008; all the signs pointed to it. In 2010 Delphi was in a really bad place and I had a choice: drop Delphi, throw 20 years of hard-earned skills out the window and seek refuge in C++ or C#; or adapt object pascal to the new paradigm and try to save as much of our collective knowledge and skills as I could.

Even before I started on Smart I knew that something like node.js would appear. It was inevitable. Not because I am so clever, but because emerging new technology follows a pattern. Once it reaches critical mass – universal adoption and adaptation will happen. It follows logical steps of evolution that apply to all things, regardless of what the product or solution may be.

What is going to happen, regardless of what you feel

Ask yourself: what are the implications of program code being virtual? What are the logical steps when your code is 100% abstracted from the hardware and the underlying, native operating system? What are the implications when script code approaches the speed of native code (modern JavaScript engines JIT-compile to machine code, so JavaScript now runs close to native speed), yet can be clustered, replicated, moved and even paused?

Let me say it like this: the next generation of rapid application development won't deliver executable files or single-platform installers. You will deliver entire eco-systems; products that can be scaled, moved between hosts and replicated – products that run and exist in the cloud purely as virtual instances.


Norwegian-developed FriendOS is just one of the cloud-based operating systems in development right now. It will have a massive impact on the world

Where Delphi developers today drag and drop components on a form, future developers will drag and drop entire service stacks. You won't drop just a single picture on a form, but connectors to international media resource managers; services that guarantee accessibility across continents, 24 hours a day, seven days a week.

You think the Chromebook is where it ends? It's just the beginning.

Right now there are 3 cloud-based operating systems in development, all of them with support for the new, distributed software model. They allow you to write both the back-end and front-end of your program, which in the new model is regarded as a single entity or eco-system. Things like storage have been distributed for well over a decade now, and you can pipe data between Dropbox, Google Drive or any host that implements a REST storage interface.

Some of the most powerful companies in the world are involved in this. Now take a wild guess what language these systems want you to use.

I’m sorry, but the way we make programs today is about to go extinct.

Understanding the new software model

As a Delphi or Lazarus developer you are used to the notion of server-side applications and client-side applications. The distinction between these has always been clear, but that is about to change. It's still going to be around, at least for the next decade or so, but only for legacy purposes.

To backtrack just a second: Smart introduced support for node.js applications earlier, but at a very low level. In the next update we introduce a large number of high-level classes that are going to change the way you look at node completely.

Two new project types will be introduced in the future, giving you a very important distinction. Namely:

  • Cloud service
  • System service

To understand these concepts, you first have to understand the software model that next generation cloud operating systems work with. Superficially it may look almost identical to the good old two-tier model, but in the new paradigm it is treated as a single, portable, scalable, cluster capable entity.


In the new paradigm this is now a single entity

Clustering and scaling is what tends to confuse traditional developers, because scaling in a native language is hard work. First you have to write your program in such a way that it can be scaled (i.e. work as a member of a group, or cluster). Secondly you have to write a gate-keeper, or head, that delegates connections between the members of the cluster. If you don't do this from the very beginning, it will be a costly affair to retrofit a project with the required mechanisms.

Node.js is just awesome because it can cluster your code without you having to program for that. How? Because JavaScript is virtual. So you can fire up 50, 100 or 10.000 instances of the same process and the only thing you need to worry about is the gate-keeper process. You just park the cluster in another IP range that is only accessible by the gatekeeper, and that’s pretty much it.

When a software eco-system is installed on a cloud host, the entire architecture described by the package is created. So the backend part of your code is moved to an instance dedicated for that, the front end is installed where it belongs, same with database and so on. Forget about visual controls and TComponent, because on this level your building blocks are whole services; and the code you write scales from low-level socket coding to piping terabytes of data between processes.

Running your Smart Pascal server as a system-level daemon is very easy once you know what to look for :)

PM2 is a node.js process manager that gives you clustering and pooled application behavior for free out of the box. You don’t even have to tailor your code for it

Services that physically move

While global accessibility is fine and good, speed is a factor. It goes without saying that having to fetch data from Asia when you are in the US is going to be less than optimal. But this is where cloud is getting smarter.

Your services will actually physically move to a host closer to where you are. So let’s say you take a business trip from the US to Hong-Kong. The service will notice this, find a host closer to where you are, and replicate itself to that server.

This is not science fiction; it's already implemented in Azure, and Google's APIs account for this behavior. It's pretty cool if you ask me.

Is node.js really that powerful?

Let me present you with a few examples. It's important to understand that these examples don't mean everyone has to operate on this scale. But we feel it's important to show people just what you can achieve and what node is capable of.

Netflix is an online video streaming service that has become a household name in a very short time. Cloud can often be a vague term, but no other service demonstrates the potential of cloud technology as much as Netflix. In 2015 Netflix had more than 69 million paying customers in roughly 60 countries. It streams on average 100 million media hours a day.

Netflix moved from a traditional, native software model to a 100% clustered, Node.js-powered model in 2014. The ability for Netflix to run nearly universally on all devices, from embedded computers to smart TVs, is largely due to their JavaScript codebase.

PayPal is a long-standing online banking and payment service that was first established in 1998. In Q4 of 2016 PayPal had 192 million registered customers world-wide. The service’s annual mobile payment volume was 66 billion US dollars in 2016. More than triple that of the previous year. Paypal moved from a traditional, native server model to Node.js back in 2015, when their entire transaction service layer was re-written from scratch.

Uber is a worldwide taxi company that is purely cloud and web based. Unlike traditional taxi companies Uber owns no cars and doesn't employ drivers; instead it's a service that anyone can partake in – either as a customer or a driver. As of 2016 Uber operates in 551 cities across 60 countries. It delivers more than one million rides daily and has an estimated 10 million customers.

Uber's server technology is based on Node.js and exists purely as a cloud-based service. Uber has several client applications for various mobile devices; the majority of these are HTML5 applications that use Cordova/Phonegap (same as Smart applications).

Understanding Smart

While the RTL and full scope of the technology have been a bit of a "black box" for many people, hopefully the ideas and concepts around it have matured enough for people to open up to it. We were a bit early with it, and without the context that is showing up now I do understand that it can be hard to grasp the full scope of it (not to mention the potential).

With the cloud and some of its potential (and no, it's not going away), a sense of urgency should start to set in. Native programming is not going away, but it will be marginalized to the point where it goes back to its origins: as a discipline and part of engineering.

Public software and services will move to the cloud and as such, developers will be forced to use tools and languages better suited for that line of work.

We firmly believe that object pascal is one of the best languages ever created. Smart Pascal has been adapted especially for this task, and the time-saving aspects and "edge" you get by writing object pascal rather than vanilla JavaScript are unquestionable. Inheritance alone is helpful, but the full onslaught of high-level features Smart brings takes it to the next level.

The benefits of writing object-oriented, class-based code are readability, order and maintainability. The benefit of a large RTL is productivity – the most important aspect of all in the world of software development.


Hopefully the importance of our work will be easier to understand and more apparent now that the cloud is becoming more visible and people are picking up on the full implications of this.

The next and obvious step is moving Smart itself to the cloud, which we are planning for now. It will mean you can code and produce applications regardless of where you are. You can be in Spain, France or Oklahoma USA – all you will need is a browser, your object pascal skills and you’re good to go.

Things like "one click" hosting and instance budgets for auto-scaling – the value for both developers and investors should be fairly obvious at this point.

Starting Monday we will actively look for investors.

Sincerely

Jon Lennart Aasenden

Embedded boards, finally!

December 19, 2016 Leave a comment

I was about to conclude that this day horizontally sucked beyond measure, but just as I thought as much – the doorbell rang. It was FedEx (ta-da!) with not one package, but two packages of super nerdy goodness. And here I was sure these puppies wouldn't arrive until after Xmas!


Top: the x86 UP board; bottom left: a Raspberry PI 3; bottom right: the ODroid XU4

The ODroid XU4

A while back I ordered the very exciting "Raspberry PI 3 killer" ODroid XU4. It's a bit of a unicorn, said to be roughly 10 times faster than the Raspberry PI 3. Honestly, having looked at the specs I can't imagine it being more than 3 to 4 times faster, depending greatly on what your application is doing and the operating system in question. Here is the full spec sheet:

  • Samsung Exynos 5422 Cortex™-A15 2 GHz and Cortex™-A7 Octa core CPUs
  • Mali-T628 MP6(OpenGL ES 3.0/2.0/1.1 and OpenCL 1.1 Full profile)
  • 2 Gbyte LPDDR3 RAM PoP stacked
  • eMMC 5.0 HS400 Flash Storage
  • 2 x USB 3.0 Host, 1 x USB 2.0 Host
  • Gigabit Ethernet port
  • HDMI 1.4a for display

As you can see the ODroid comes armed with 8 CPU cores while the Raspberry PI 3 (RPI3) has only 4. It also comes with twice the RAM (which really impacts performance), and tests have shown the ODroid disk/IO speed is roughly double that of the RPI3. But the cores are what caught my eye, because 8 cores means 8 simultaneous hardware threads. This means code written for node.js, Apache [PHP] or indeed custom, natively compiled Free Pascal servers will be able to handle twice the payload straight off the bat. For stateless protocols like HTTP, I am guessing a performance factor of 3 to 1 compared to an RPI3.
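As a back-of-the-envelope illustration of why the extra cores matter for natively compiled Free Pascal servers, you can spawn one worker per core. This is only a sketch of the pattern, not a finished server; TThread.ProcessorCount is assumed to exist in your FPC version (recent releases have it), otherwise just hard-code the number.

program Workers;

{$mode objfpc}{$H+}

uses
  {$IFDEF UNIX}cthreads,{$ENDIF}     // required for threading on Linux/ARM boards
  Classes, SysUtils;

type
  TWorker = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TWorker.Execute;
begin
  // a real server would accept and handle requests here;
  // this worker just announces itself and exits
  WriteLn('Worker started');
end;

var
  Pool: array of TWorker;
  Cores, i: Integer;
begin
  Cores := TThread.ProcessorCount;   // 8 on the XU4, 4 on the RPI3 (assumed available)
  SetLength(Pool, Cores);
  for i := 0 to Cores - 1 do
    Pool[i] := TWorker.Create(False);
  for i := 0 to Cores - 1 do
  begin
    Pool[i].WaitFor;
    Pool[i].Free;
  end;
end.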

Having said all this, there will be exceptions where the PI3 performs equal or better. The RPI3 SoC has better HD video functionality, so the ODroid has to work harder to produce the same.

For those of you thinking the ODroid will solve all your Amiga emulation problems, the answer is yes. It is significantly faster than the RPI3. But never forget that single threaded applications like UAE (Unix Amiga Emulator) involves a high degree of chipset synchronization. If the chipset performs out of sync (for instance if the blitter finishes faster than it should), all manner of problems will occur. So all that synchronization causes some parts to wait. Meaning no matter how fast your computer is (even my Intel i7 CPU) UAE will never reach peak performance. It will never use the full capacity of a single core due to synchronization.

A small note is also popularity, which means fewer updates. The ODroid has a somewhat slower update cycle than the Raspberry. It has thousands of users, but it's not even close to getting the attention of the Raspberry PI project. And where the Raspberry website has been community-oriented, with inspiring graphics, free tutorials and a huge forum from day one – the ODroid has nothing that even compares.

From what I read and hear the biggest problem has been kernel updates. But, seriously, that is often a Linux fetish. Unless it's a super important update, like going from 32 to 64 bit or patching a really terrible security flaw – most users are not going to be bothered by a kernel that is two versions old. You still have access to the proverbial library of Alexandria that is the apt package manager (the apt-get command), and compiling a few programs from source is not that hard. It's typically download, unpack, configure, make and install – and that's it.

Naturally, considering the faster CPU of the ODroid, double the ram and double the IO speed – emulators like UAE will be awesome. The ODroid is also the only ARM SoC out there in this price range that plays Sega Saturn and PlayStation 2 games without any problems. And it will also be far better suited for servers, be it natively compiled freepascal servers, Mono ASP.NET or Smart Pascal [node.js] work.

The UP x86 board

The second package contained another embedded board, this time the x86 based UP board. I bought the most expensive model they had, containing 4 gigabytes of ram and 64 GB EMMC on-board storage. The board sports a 64 bit Intel® Atom™ x5 Z8350 processor, running as high as 1.92 GHz. Between Raspberry PI 3, ODroid XU4 and UP – let there be no doubt which model will come out on top.

  • Intel® Atom™ x5 Z8350 Processor 64 bit – up to 1.92GHz
  • Intel® HD 400 Graphics, 12 EU Gen 8, up to 500 MHz; supports DirectX 11.1/12, OpenGL 4.2, OpenCL 1.2, OpenGL ES 3.0, H.264, HEVC (decode), VP8
  • 4GB DDR3L
  • 64GB eMMC
  • 4 x USB2.0 external connector
  • 2 x USB2.0 port (pin header)
  • 1 x USB 3.0 port
  • 1 x Gb Ethernet (full speed) RJ-45
  • HDMI (DSI / eDP)
  • MIPI-CSI Camera interface
  • 5V DC-in @ 3A 5.5/2.1mm jack

Where the ODroid is rumoured to be 10 times faster than an RPI3 – a statement closer to urban myth than fact – the UP board on the other hand IS, without a shadow of a doubt, 10 times faster (and then some). Since this is essentially a vanilla x86 SoC PC, the world is your oyster. The full onslaught of x86 software is at your disposal, be it Windows or Linux you choose as your base.


x86 UP board, same size as the PI3 but packing a hell of a punch!

The board has no problems running Windows 10, Ubuntu (the full 64-bit version) and Android. And it's actually more responsive than a few laptops on sale. I wanted a cheap laptop for dedicated Amiga emulation – but having tested both the low-end Asus and Dell models, it just left me wanting. The problem with cheap laptops is often the absurdly small amount of memory they ship with (1 to 2 gigabytes). The devices spend more time swapping data back and forth between ram and disk than they do running your code!

This is not the case for the UP board, thanks to its 4 gigabytes of on-board RAM.

Being an Embarcadero Delphi and Smart Mobile Studio (object pascal for JavaScript) developer, this board is perfect for my needs. It has so much more to offer than the cheap ARM boards, and it is something I can use to create custom hardware projects while avoiding many of the adaptation problems associated with ARM Linux.

While I love the Raspberry PI 3, Linux takes some getting used to. There are things that take me days to figure out on Linux that I complete in minutes on Windows (I have been using Windows since the beginning after all). This will change as my skill and insight into Linux matures, but if I had to choose between the RPI3 and the UP board, I would pick the UP board every time.

Price is a factor here. The RPI3 sells for between $35 and $40, the ODroid retails for $99, while the x86 UP board can be yours for $150. But you can also buy a cheaper model with less RAM and eMMC storage. The UP provider has the following options:

  • $99 – 2 gigabytes of memory, 16 gigabytes of eMMC storage
  • $109 – 2 gigabytes of memory, 32 gigabytes of eMMC storage
  • $129 – 4 gigabytes of memory, 32 gigabytes of eMMC storage
  • $150 – 4 gigabytes of memory, 64 gigabytes of eMMC storage

If you’re thinking “oh, for that price I could get 2, 3 and 4 PIs respectively!”, keep in mind the level of CPU power, graphics speed and available software here. The fact that you can install and run Windows 10 and just copy over your Delphi or Smart applications is really sweet.

And if you are into emulation then naturally, Windows has the latest and greatest. Things like EmulationStation are going to run so much better on this device than anything else at this price, especially if you get the x86 Linux edition. You can run the latest WinUAE on Windows, giving you full PPC emulation and essentially booting straight into Workbench and OS 4.1. $150 for an OS4.x capable SoC that is x86 based makes a hell of a lot more sense than forking out $489 for the A1222 low-end PPC based Amiga NG board planned for 2017. I mean, who the hell buys PPC in 2017 (!). Emulation will always be a little bit slower than the real thing, but the difference here is negligible.

And with the UP board you can also recycle the hardware when you get bored with OS 4. I mean, there are only so many things you can do with a modern Amiga. It is great fun for enthusiasts like myself, but I would much rather run a juiced up version of OS 3.9 with my massive collection of WHDLoad software, CD32 software and modern compilers (like Free Pascal 3.1, which works brilliantly on an emulated, classic Amiga).

Programming

For the developer, the UP board gives you the same choices you enjoy on your Windows development machine in general. You can use it to deliver your Delphi, C++ Builder or Smart Pascal solutions. If you happen to own a Microsoft Windows Embedded license, performance will be greatly enhanced since you can drop a lot of the “standard” stuff that has to ship with Windows. Remember, a standard Windows installation is written to work on millions of PCs and an equal number of different hardware configurations. For customized, single purpose applications (like a kiosk system, information booth or cash machine type system) you will be able to cut out quite a lot. Do you need support for printers? Do you need the driver collection for hardware that is not there? Windows Embedded allows you to cut the disk image down to the bone, and it is also written to run on slower CPUs than what people in general own – so the performance is much better.

  • Run your Delphi projects, it’s a normal PC after all
  • Run your node.js Smart Pascal projects with ease
  • Make use of nodewebkit for Smart Pascal to create full screen, desktop oriented software, enjoy the full scope of GPU powered CSS
  • Enjoy the debugging tools you already know
  • Run familiar databases like MSSQL, MySQL and Firebird with friendlier, more mature editors and tools
  • Use the friendlier backup solution that ships with Windows rather than some cryptic Linux solution (although some Linux distributions with desktop control-panels are just as great!)
  • Use GPIO ports from Delphi and C++ Builder (!) – see the sketch right after this list
  • Just hook your UP board into your network and install/setup via remote desktop. It saves a lot of time.
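To give you an idea of what that GPIO point looks like in practice, here is a rough sketch of how I would approach it from Delphi on Windows: import whatever GPIO library the board vendor ships and toggle a pin through it. Be aware that “upgpio.dll” and the exported routine names below are hypothetical placeholders – check the library and headers that ship with your board revision before copying anything.

program BlinkSketch;

{$APPTYPE CONSOLE}

// Minimal sketch: toggling a GPIO pin from Delphi on an UP board running Windows.
// NOTE: "upgpio.dll" and the exported routines are hypothetical placeholders for
// whatever GPIO library the board vendor actually ships - adjust accordingly.

uses
  Winapi.Windows, System.SysUtils;

const
  GPIO_OUTPUT = 1;

function gpio_open(Pin, Mode: Integer): Integer; cdecl;
  external 'upgpio.dll' name 'gpio_open';
function gpio_write(Pin, Value: Integer): Integer; cdecl;
  external 'upgpio.dll' name 'gpio_write';
procedure gpio_close(Pin: Integer); cdecl;
  external 'upgpio.dll' name 'gpio_close';

var
  i: Integer;
begin
  if gpio_open(7, GPIO_OUTPUT) <> 0 then
    raise Exception.Create('Could not open GPIO pin 7');
  try
    for i := 1 to 10 do
    begin
      gpio_write(7, i mod 2); // alternate between high and low
      Sleep(500);             // half a second between toggles
    end;
  finally
    gpio_close(7);
  end;
end.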

If you are a Delphi programmer looking for a reasonable embedded board to ship your killer Windows-based product, the UP board is by far the best option I have seen (and I have tested quite a few boards out there!).

The problem with high-end boards is not just the initial price, which can be anything from $300 to $400. Sure, these boards ship with Intel i3 or i5 processors (much faster), but you end up paying extra for everything. Need a new RAM module? That will set you back a pretty penny! Want GPIO? Again, we are talking $100+ just to get that.

By the time you sum up the expenses, you realize it would have been cheaper to just visit the local computer store and buy a micro-ATX board with more RAM and a faster processor (!). Micro-ATX is not that big, perhaps two times larger than the UP board (if you place them in a square). But micro-ATX boards are often too high to be practical for embedded projects where you want to cram as much hardware as you can into something the size of a router or a set-top-box. The heat sink hovers over the motherboard like the London Eye.

Here is what you should have in mind:

  • Is size a factor?
    • No
      • Buy a cheap micro-ATX PC
    • Yes
      • Take a look at the boards listed below
  • Do you need Windows support?
    • No
      • Get an ARM based device
    • Yes
      • Do you need an i3 or i5 CPU?
      • Do you need GPIO built-in?
Note: The UP project is presently busy working on their second kickstarter, cleverly called “UP 2” (sigh). This board, dubbed “UP squared”, is slightly larger, but there is a good reason for that. First of all, it ships with the more powerful Intel® Pentium™ N4200 2.5 GHz CPU, up to 8 gigabytes of memory and 128 gigabytes of eMMC storage. Just like the present UP board you will be able to pick a configuration that matches your wallet, and they are aiming at roughly the same price range as they have now. Head over to the UP project and have a peek at the specs!

Negatives

So far the only negative thing about the UP board is the speed of the eMMC storage. As you probably know, eMMC is a storage medium designed to be a cheap alternative to SSD. But when it comes to speed, actual performance can be anything from SD card level to USB 3 level. This is very vendor specific, and obviously the cheaper models are going to be the slowest. So the first thing you will notice is that, even though this is a PC, installing things takes a lot longer.

You can however enter the BIOS and boot from a USB 3 stick. For homebrew projects that shouldn't matter much, and these days a small and sexy 128 or 256 gigabyte stick (you know, those tiny USB storage devices that are barely more than the USB plug itself) is affordable.

I find myself having to look for negatives here. I do think the UP organization should do something about the startup of the device. When you boot, the UP logo is displayed, but they should have added a line of text about which key to press for the BIOS. I ended up going through the typical ones – F1, F2, F8 and F10 – which did nothing. The next key was Print Screen, before I finally hit DEL and the BIOS setup came up.

Insignificant, but a small detail that would make their product more polished.

A far worse issue is how they charge money for branding. When the product boots, the UP logo comes into view for a couple of seconds (in full-screen). If you want to replace that, the minimum fee is $500 (for a small picture). This is something that simply infuriates me, because you cannot change it yourself.

When you buy an embedded board for production purposes, hidden costs like this – for something as simple as a picture – are completely unacceptable. I sincerely hope they drop this practice, because I will not use a board with a fixed boot picture in the embedded systems I deliver. There are plenty of other boards out there with similar specs at competitive prices. Being able to brand your own system is considered normal in 2016 and has been common for years now.

At the very least, an option in the BIOS for removing or hiding the UP logo should be in place.

 

The test

For the next few days, before and after Xmas, I'll be playing with these boards as much as time allows. I have already installed the latest Ubuntu on the UP board – and it performed brilliantly. I am presently giving Windows 10 a test drive, and my primary aims will be Smart Mobile Studio graphics demos and UAE running Amiga OS 4.1 Final Edition. I will also test how well various emulators perform. This is especially exciting for the ODroid, since that is the one most people will pick as an alternative to the Raspberry PI 3.

If you are a dedicated retro-gamer or just love all things Amiga, then again, the UP board should be of special interest to you. It will set you back $150, but considering that it has the exact same form-factor as the Raspberry PI (except its components stand a few millimetres taller) and is considerably faster (in a league beyond both the PI and the ODroid) – this could be your ticket to a cheap “next-gen” emulated Amiga. It will run OS 4.1 Final without problems under WinUAE, and it will be a pleasant experience.

I will give you updates as the frame rates, execution speed and overall performance numbers come in. Oh, and the tests will obviously use the RPI3 as the baseline.

Cheers guys!

 

Goodbye G+ Delphi Developers group

December 6, 2016 20 comments
A bit sad today. I typically post blog articles about Delphi, Smart Pascal and embedded projects both on Facebook and Google+, but apparently Smart Pascal is not welcome in Google's Delphi group. Even examples where Delphi talks with a node.js server on a Linux box are apparently “off topic” and “not Delphi”. So much so that my post was deleted. I'm getting sick and tired of this double standard. This is not the first time Smart has been pushed aside. We were even banned from speaking at Delphi Tage a couple of years back. I really don't understand this attitude at all.
So easy, so powerful, and you can deploy it anywhere: an embedded system, a dedicated server – or push it to your Amazon / Azure cloud stack. Node.js is so powerful once you understand how to use it.

This is from the deleted post. If you look at the picture you will notice a Delphi UDP client/server on the right talking with a node.js service on the left. Apparently this is not Delphi enough for the admin in charge of the Delphi group on G+.
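To put that in perspective, the Delphi side of such a setup is about as vanilla as it gets. Here is a minimal sketch using Indy's TIdUDPClient (the components that ship with Delphi); the host, port and JSON payload are made up for the example, and the node.js service on the other end simply listens on the same UDP port.

program UdpToNode;

{$APPTYPE CONSOLE}

// Minimal sketch: a Delphi console client pushing a UDP datagram to a
// node.js service. Host, port and payload are illustrative placeholders.

uses
  IdUDPClient;

var
  Client: TIdUDPClient;
begin
  Client := TIdUDPClient.Create(nil);
  try
    Client.Host := '127.0.0.1';  // the box running the node.js service
    Client.Port := 1881;         // hypothetical port the service listens on
    Client.Send('{"cmd":"ping","from":"delphi"}');  // fire-and-forget datagram
    WriteLn('Datagram sent');
  finally
    Client.Free;
  end;
end.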

I was particularly saddened by this since I have, from time to time, seen Elevate Software post about their web builder utility. But for some reason Smart Pascal is not allowed to do the same (in this case it was a post from my blog). With all due respect to Elevate Software, when it comes to interfacing Delphi with a modern IoT infrastructure or eco-system, Elevate is not even in the same ballpark. But the focus here is on the technical, and Elevate's web builder is ultimately in the same category as Smart Mobile Studio. That their system, which compiles a subset of object pascal to JavaScript, is somehow allowed – while our product is not – is something I find both morally unacceptable and hypocritical. First of all because Embarcadero has held workshops on Angular.js (!).
Looking into the actual data, with JavaScript leading and JavaScript libraries on client and server side (Angular.js, Node.js) on the rise, it was nice to see that while Delphi was not listed as an option, it was the most typed entry in the “others” category – Source: Marco Cantu
Let me sum up a few facts:
  • Smart Mobile Studio is written 100% in Delphi
  • Smart Mobile Studio was created from scratch to complement Delphi
  • Smart Mobile Studio supports RemObjects SDK out of the box
  • Smart Mobile Studio supports Embarcadero DataSnap out of the box
  • Smart Mobile Studio helps Delphi developers write the middleware or interfacing that sits between a native Delphi solution and a customer's node.js or purely web-based solution. This is 2016 after all.
  • Where a Delphi developer would previously have to decline a job offer because the customer's existing infrastructure is based on node or emits data unsuitable for traditional Delphi components, developers can now relax and write the missing pieces in Smart Pascal, a dialect roughly 90% compatible with the language they know and love to begin with.
  • Smart Mobile Studio ships with a vast RTL that greatly simplifies talking with Delphi. It also has a VCL inspired component hierarchy where more and more complex behavior is introduced vertically. This gives your codebase a depth which is very hard to achieve under vanilla JavaScript or TypeScript.
  • Smart Mobile Studio is all about Delphi. It is even used to teach programming in the UK – to teenagers who, by consequence and association, are statistically more likely to buy Delphi as they mature.
Now I have no problem understanding or respecting the notion of single-topic groups. But having said that, I expect the administrator(s) to apply that rule equally and without exception – not just to Smart Pascal.
The reality of 2016 is that no Delphi developer uses a single language or dialect. That may have been true 10-15 years ago, but that's not where we are today. When a customer demands that you interface your Delphi software with their existing node.js or IO based eco-system, there are only three options available:
  • Decline the project
  • Learn JavaScript or TypeScript and do battle with their absurd idiosyncrasies, lack of familiar data types, lack of inheritance and lack of everything you are used to
  • Use Smart Mobile Studio to write the middleware between your native solution and the customer's existing infrastructure

If you pick option number two, it won't take many days before you realize just how alien JavaScript is compared to Delphi or C++ Builder. And you will consequently start to understand the value of our RTL, which is written to deal with anything from low-level coding (AllocMem, ReallocMem, FillMemory, Move, buffers, direct memory access, streams and even threading) and upwards. Our RTL is written to make the JavaScript virtual machine palatable to Delphi developers.
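To make that concrete, the kind of low-level pattern I am talking about is plain object pascal like the snippet below. Note that this is vanilla Delphi/FPC code rather than the Smart RTL's actual API – the point is simply that these habits carry over.

program MemoryPatternSketch;

{$APPTYPE CONSOLE}

// Plain object pascal buffer juggling - the style of low-level code
// (AllocMem, ReallocMem, FillChar, Move) referred to above.
// Vanilla Delphi/FPC, not Smart Mobile Studio specific API.

var
  Buffer: Pointer;
  SecondHalf: PByte;
begin
  Buffer := AllocMem(1024);            // zero-initialised 1 KB block
  try
    FillChar(Buffer^, 1024, $FF);      // stamp a pattern into the block
    ReallocMem(Buffer, 2048);          // grow the allocation to 2 KB
    SecondHalf := PByte(Buffer);
    Inc(SecondHalf, 1024);             // point at the new upper half
    Move(Buffer^, SecondHalf^, 1024);  // copy the original data into it
    WriteLn('First byte of copy: ', SecondHalf^);  // prints 255
  finally
    FreeMem(Buffer);
  end;
end.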

Banning dialects?

Once you start banning dialects of a language, or an auxiliary utility designed to empower Delphi and make sure developers can better interface with that aspect of the marketplace – where does it stop? I am honestly curious as to where exactly the Google+ Delphi group draws the line here.

Should RemObjects Oxygene likewise be banned, since it helps Delphi developers target the dot net framework? That would be odd, since Delphi shipped with Oxygene for years.

Should script engines be banned? What about SQL? SQL is a language most Delphi developers know and use – but it is by no measure object pascal and never will be. Interestingly, Angular.js seems to be just fine for the Google+ Delphi group – which is a bit hypocritical, since that is JavaScript plain and simple.

What about report engines? Take FastReport for instance: FastReport has for the past decade or more bolted its own scripting engine into the product, a script engine that supports a subset of object pascal but also Visual Basic (blasphemy!). On iOS you are, if we follow the Apple license agreement down to the letter, not even allowed to use FastReport. Apple is very clear on this: applications that embed scripting engines and at the same time download runnable code (which would be the case when downloading report files) are not allowed on the App Store. The idea here is that should some harmful code be downloaded, well, then Apple will give you a world of hurt. And even if you consider it a report file, it does contain runnable code – and that is a violation.

So is FastReport Delphi enough? Or should that be banned as well?

Where exactly do we draw the line? I can understand banning posts about dot net if it's all about C#, but if it is in relation to Delphi, or deals with Delphi talking with dot net (or an actual dialect like Oxygene), then I really don't see why it should be banned or deleted. Even Delphi itself uses dot net – it's one of the prerequisites for installing Delphi in the first place. I guess Delphi should also be banned from the Delphi group then?

In our group on Facebook, all are welcome: Embarcadero, Lazarus and Free Pascal, Elevate Software (be it their database engines or web builder), the Pax compiler, DWScript, Smart Pascal, NewPascal (or whatever it's called these days) and even Turbo Pascal for that matter. And we sure as shit don't delete posts about Delphi talking to another system.

So goodbye G+ and good luck

Raspberry PI fun

November 1, 2016 2 comments

Just got this box in the mail, now just waiting for my new touch-screens. Looks like I got the weekend covered.

14900531_10153919669245906_4646692047244893332_n

There was a small LCD screen that came with the kit. I will be putting that into my Amiga 500 retro-mod. I will try to wire it up so that every time I start a game or program, the title of the executable scrolls across the display.

This will involve an FPC compiled signal bridge, but I may be able to do the whole thing in node.js. On the Amiga side of things, a task dump running on interrupt should be enough to catch the filename, which is then broadcast via UDP on localhost. That way the Linux side of things can pick up the data – and push the text to the display.
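To give you an idea, the Linux side of that bridge could be little more than a UDP listener compiled with FPC, along the lines of the sketch below. The port number (4100) is an arbitrary pick for the example, and the text is simply written to stdout – pushing it to the actual LCD driver is left out.

program TitleBridge;

{$mode objfpc}{$H+}

// Minimal sketch of the Linux-side UDP listener for the LCD bridge.
// It binds to an arbitrary example port (4100) and prints each received
// filename to stdout; driving the actual display is left as an exercise.

uses
  Sockets;

const
  BRIDGE_PORT = 4100;  // arbitrary example port

var
  Sock: cint;
  Addr, From: TInetSockAddr;
  FromLen: TSockLen;
  Buf: array[0..255] of Char;
  Received: SizeInt;
  Title: string;
begin
  Sock := fpSocket(AF_INET, SOCK_DGRAM, 0);
  if Sock < 0 then
    Halt(1);

  FillChar(Addr, SizeOf(Addr), 0);
  Addr.sin_family := AF_INET;
  Addr.sin_port := htons(BRIDGE_PORT);
  Addr.sin_addr.s_addr := 0;  // INADDR_ANY: listen on all interfaces, localhost included

  if fpBind(Sock, @Addr, SizeOf(Addr)) < 0 then
    Halt(2);

  WriteLn('Waiting for titles on UDP port ', BRIDGE_PORT);
  while True do
  begin
    FromLen := SizeOf(From);
    Received := fpRecvFrom(Sock, @Buf, SizeOf(Buf), 0, @From, @FromLen);
    if Received > 0 then
    begin
      SetString(Title, PChar(@Buf[0]), Received);
      WriteLn('Now running: ', Title);  // here the text would be scrolled on the LCD
    end;
  end;
end.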

Nerdvana..