What is new in Smart Mobile Studio 3.0

July 16, 2018

Trying to sum up the literally thousands of changes we have made to Smart Mobile Studio over the past 12 months is quite a challenge. Instead of blindly rambling on about every little detail – I’ll focus on the most valuable changes; changes that you can immediately pick up and experience for yourself.

Scriptable CSS themes


A visual control now has its border and background styled from our pre-defined styles. The styles serve the same function in all themes even though they look different.

This might not feel like news since we introduced this around xmas, but like all features it has matured through the beta phases. The benefits of the new system might not be immediately obvious.

So what is so fantastic about the new theme files compared to the old css styling?

We have naturally gone over every visual control to make them look better, but more importantly – we have defined a standard for how visual controls are styled. This is important because without a theme system in place, making applications “theme aware” would be impossible.

  • Each theme file is constructed according to a standard
  • A visual control is no longer styled using a single css-rule (like we did before), but rather a combination of several styles:
    • There are 15 background styles, each with a designated role
    • There are 14 borders, each designed to work with specific backgrounds
    • We have 4 font sizes to simplify what small, normal, medium and large mean for a particular theme.
  • A theme file contains both CSS and Smart pascal code
  • The code is sandboxed and has no access to the filesystem or RTL
  • The code is executed at compile time, not runtime (!). The code is only used to generate things like gradients from constants; “scaffolding” code, if you will, that makes it easier to maintain and create new themes.
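To make the “scaffolding” idea concrete, here is a rough sketch of what such compile-time code does, translated into JavaScript for illustration (the real theme files use Smart Pascal, and the constant and function names below are made up):

```javascript
// Hypothetical sketch of compile-time theme scaffolding.
// The real theme files use Smart Pascal; all names here are illustrative.
const EdgeRounding = "4px";   // theme-wide constants, defined once
const BaseColor    = "#3b5998";
const LightColor   = "#6d84b4";

// Generate a finished CSS rule from the constants at "compile time"
function makeBackground(selector, from, to) {
  return selector + " { background: linear-gradient(to bottom, "
       + from + ", " + to + "); border-radius: " + EdgeRounding + "; }";
}

const css = makeBackground(".TW3ButtonBackground", LightColor, BaseColor);
console.log(css);
```

Only the generated CSS ends up in the application; the generator code itself never ships.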

Optimized and re-written visual controls

Almost all our visual controls have been re-written or heavily adjusted to meet the demands of our users. The initial visual controls were originally designed as examples, following in the footsteps of Mono, where users are expected to work more closely with the code.

To remedy this we have gone through each control and added the features you would expect to be present. In most cases the controls are clean re-writes, taking better advantage of HTML5 features such as flex-boxing and relative positioning (you can now change the layout mode via the PositionMode property; DisplayMode is likewise a read-write property).


Flex boxing relieves controls of otherwise expensive layout chores and evenly distributes elements

Flex-boxing is a layout technique where the browser will automatically stretch or equally distribute screen real estate among child elements. Visual controls like TW3Toolbar and TW3ListMenu make full use of this – and as a result they are more lightweight, require no resize code and behave like native controls.
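The space distribution flexbox performs can be sketched as simple math. This is a hypothetical JavaScript model of flex-grow, not browser code:

```javascript
// Illustrative model of flexbox space distribution: each child claims its
// basis, then leftover space is shared in proportion to its flex-grow value.
function distribute(containerWidth, children) {
  const used      = children.reduce((sum, c) => sum + c.basis, 0);
  const totalGrow = children.reduce((sum, c) => sum + c.grow, 0);
  const leftover  = Math.max(0, containerWidth - used);
  return children.map(c =>
    c.basis + (totalGrow > 0 ? (leftover * c.grow) / totalGrow : 0));
}

// Three toolbar buttons share 300px equally -- no resize code required
console.log(distribute(300, [
  { basis: 0, grow: 1 },
  { basis: 0, grow: 1 },
  { basis: 0, grow: 1 },
])); // → [100, 100, 100]
```

The browser does this (and much more) natively, which is exactly why controls built on flexbox need no per-resize layout code.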

Momentum scrolling as standard

Apple has changed the rules for scrolling 3 times in the past eight years, and it drives HTML/JS developers nuts every time. We decided years ago that we had had enough and implemented momentum scrolling ourselves, written in Smart Pascal. So no matter if Apple or anyone else decides to make life difficult for developers – it won’t bother us.


Momentum scrolling with indicators (or scrollbars) is now standard for all container controls and lists.

Our new TW3Scrollbox and (non-visual) TW3ScrollController mean that all our container and list controls support GPU-powered momentum scrolling by default. You can also disable this and use whatever default method the underlying web-view or browser has to offer.
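The idea behind momentum ("kinetic") scrolling boils down to a velocity that decays by friction each frame until the scroll settles. A minimal illustrative sketch in JavaScript, not the RTL's actual TW3ScrollController implementation:

```javascript
// Momentum scrolling in a nutshell: after the finger lifts, keep moving the
// content by the current velocity and bleed off a little speed every frame.
function momentum(velocity, friction = 0.95, minSpeed = 0.5) {
  let offset = 0;
  let frames = 0;
  while (Math.abs(velocity) > minSpeed) {
    offset += velocity;     // move content by the current velocity
    velocity *= friction;   // lose a fraction of speed each frame
    frames++;
  }
  return { offset, frames };
}

const fling = momentum(40); // a flick starting at 40px per frame
console.log(fling.offset > 0, fling.frames > 0); // → true true
```

A harder flick (higher starting velocity) travels further and takes longer to settle, which is what gives the scrolling its natural feel.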

Bi-directional Tab control

A good tab control is essential when making mobile and web applications, but making one that behaves like native controls do is quite a challenge. We see a lot of frameworks that have problems with the bi-directional scrolling that mobile tabs do, where the headers scroll in place as you click or touch them – and the content of the tab scrolls in from either side (at the same time).


Thankfully this was not that hard for us to implement, since we have proper inheritance to fall back on. JS developers tend to be limited to prototype cloning, which makes it difficult to build up more and more complex behavior. Smart enjoys the same inheritance system that Delphi and C++ use, and this makes life a lot easier.

Google Maps control

Not exactly hard to make but a fun addition to our RTL. Very useful in applications where you want to pinpoint office locations.


Updated ACE coding editor

ACE is regarded by many as the de-facto standard text and code editor for JavaScript. It is a highly capable editor on par with SynEdit in the Delphi and C++ world. This is actually the only visual control that we did not implement ourselves, although our wrapper code is substantial.


Ace comes with a wealth of styles (color themes) and support for different programming languages. It can also take on the behavior of other editors like Emacs (an editor as old as Unix).

We have updated Ace to the latest revision and tuned the wrapper code for speed. There was a small problem with padding that caused Ace to misbehave earlier, this has now been fixed.

The Smart Desktop, windowing framework

People have asked us for more substantial demos of what Smart Mobile Studio can do. Well this certainly qualifies. It is probably the biggest product demo ever made and represents a complete visual web desktop with an accompanying server (the Ragnarok Websocket protocol).


The Smart Desktop showcases some of the power Smart Mobile Studio can muster

It involves quite a bit of technology, including a filesystem that uses the underlying protocol to browse and access files on the server as if they were local. It can also execute shell applications remotely and pipe the results back.

A shell window and command-line system is also included, where commands like “dir” yield the actual directory of whatever path you explore on the server.

Since the browser has no concept of “window” (except a browser window) this is fully implemented as Smart classes. Moving windows, maximizing them (and other common operations) are all included.

The Smart desktop is a good foundation for making large-scale, enterprise-level web applications. Applications the size of Photoshop could be made with our desktop framework, and it makes an excellent starting-point for developers working on routers, set-top boxes and kiosk systems.

Node.JS and server-side technology

While we have only begun to expand our node.js namespace, it is by far one of the most interesting aspects of Smart Mobile Studio 3.0. Where we previously had only rudimentary (or very low-level) support for things like HTTP, the SmartNJ namespace provides high-level classes that can be compared to Indy under Delphi.

As of writing the following servers can be created:

  • HTTP and HTTPS
  • WebSocket and WebSocket-Secure
  • UDP Server
  • Raw TCP server

The cool thing is that the entire system namespace, with all our foundation code, is fully compatible and can be used under node. This means streams, buffers, JSON, our codec classes and much, much more.

I will cover the node.js namespace in more detail soon enough.

Unified filesystem

The browser allows some access to files, within a sandboxed and safe environment. The problem is that this system is completely different from what you find under phonegap, which in turn is wildly different from what node.js operates with.

In order for us to make it easy to store information in a unified way, which also includes online services such as Azure, Amazon and Dropbox — we decided to make a standard.


The Smart Desktop shows the filesystem and device classes in action. Here we access the user-account files on the server, both visually and through our command-line (shell) application.

So in Smart Mobile Studio we introduce two new concepts:

  • Storage device classes (or “drivers”)
  • Path parsers

The idea is that if you want to save a stream to a file, there should be a standard mechanism for doing so. A mechanism that also works under node, phonegap and whatever else is out there.

For the browser we went as far as implementing our own filesystem, based on a fast B-Tree class that can be serialized to both binary and JSON. For Node.js we map to the existing filesystem methods – and we will continue to expand the RTL with new and exciting storage devices as we move along.

Path parsers deal with how operating systems name and address folders and files. Microsoft Windows has a very different system from Unix, which in turn can have one or two subtle differences from Linux. When a Smart application boots it will investigate what platform it’s running on, and create and install an appropriate path parser.

You will also be happy to learn that the unit System.IOUtils, which is a standard object pascal unit, is now a part of our RTL. It contains the class TPath which gives you standard methods for working with paths and filenames.

New text parser

Being able to parse text is important. We ported our TextCraft parser (initially written for Delphi) to Smart, which is a good framework for making both small and complex parsers. And we also threw in a bytecode assembler and virtual-cpu demo just for fun.

Note: The assembler and virtual CPU are meant purely as a demonstration of the low-level routines our RTL has to offer. Most JS-based systems shy away from raw data manipulation; that is not the case here.


Time to get excited!

I hope you have enjoyed this little walk-through. There are hundreds of other items we have added, fixed and expanded (we have also given the form-designer and property inspector some much needed love) – but some of the biggest changes are shown here.

For more information stay tuned and visit www.smartmobilestudio.com


Starting at Embarcadero

July 16, 2018

My Facebook Messenger app has been bombarded with questions since it became known that I now work for Embarcadero. Most of the messages are in the form of questions: what is the future of Smart Mobile Studio, will I be involved in this or that, and so on.

Well, those who have followed my blog over the years, or stay in touch with me via Delphi Developer on Facebook, should know by now that I don’t tip-toe around subjects; I tend to be quite direct, and even though it’s absurdly premature – let’s just grab the hot potato and get it over with.

Future of Smart Mobile Studio

My working for Embarcadero will not change the future of Smart Mobile Studio. Smart is our baby, and the whole team at The Smart Company AS will continue, like we have for the past eight years, to evolve, improve and foster Smart Pascal. So let me be absolutely clear on this: my work on Smart Mobile Studio will continue uninterrupted in my free time. Smart Pascal is a labour of love, passion and creativity.

So there is a crystal clear line between my personal and professional time.

Nor is there any potential conflict here, as some have openly speculated; Delphi and Smart Mobile Studio target two fundamentally different market segments. Delphi is the best native development suite money can buy, while Smart is the best development system for building mobile, cloud and large-scale JSVM (JavaScript virtual machine) infrastructures.

What people often forget, even though I have underlined it 10,000 times, is that Smart Mobile Studio is written in Delphi (!) It was created to complement Delphi; to enrich it, and to allow Delphi developers to re-apply existing skills in a new paradigm.

In many ways Smart Mobile Studio is a testament to what Delphi is capable of in the right hands.

My role at Embarcadero

It was actually Facebook that “outed” me when I changed my employment status. Instead of a silent alteration on my profile, it plastered the change in bold, underline and italic.

But writing anything about this would be premature, nor do I feel it’s my place to do so.

All I can say is that I am very excited to work for the company that makes Delphi. A product that I love and have used on a daily basis since it was first launched.

Smart Mobile Studio 3

To those of you who have been worried about what might happen to Smart Mobile Studio as a consequence of my new path: I hope I have made you feel optimistic about the future so far. Because I am super optimistic! Seriously, this is awesome!

As I type this Smart Mobile Studio 3.0 beta 3 should be available via the automatic-update tool for our customers. I can’t remember a year where we have worked so hard; and we have achieved above and beyond the schedule we set back in 2017.


Smart Mobile Studio 3.0 ~ Node.js server and desktop framework demo

I don’t know how many nights my girlfriend has found me scribbling data-paths and formulas on my home-office whiteboard, or getting up at 03:30 at night to test some idea – but when you compare our new RTL with the previous one, especially our focus on node.js, you will witness a quantum leap in quality, features and technical wealth.

Separating apples from pears

Speaking of the future; my blogging style won’t change, but I will avoid mixing apples and pears. Delphi posts should be about Delphi, and Smart posts should be about Smart. I don’t think I need to explain why that is a good idea. It’s important to maintain that line between work and personal projects.

The only reason I mention both here now, is to put things to rest and make it clear for everyone that it’s all good. And it’s going to get even better.

Smart Mobile Studio 3.0 is an epic release! And Delphi is going from strength to strength. So there is a lot to be happy about! I can’t even remember when object pascal developers had so many options at their disposal.

Cheers

Jon L. Aasenden


Nano-pi Fire 3: The curse of Mali

July 5, 2018

Being able to buy SBCs (single board computers) for as little as $35 has revolutionized computing as we know it. Since the release of the Raspberry PI 1b back in 2012, single board computers have gone from being something electrical engineers work with, to something everyone can learn to use. Today you don’t need a background in electrical engineering to build a sophisticated embedded system; you just need a positive spirit, willingness to learn and a suitable SBC starter-kit.

Single board computers

If you are interested in SBCs for whatever reason, I am sure you have picked up a few pointers about what works and what doesn’t. In these times of open forums and 24/7 internet, the response time from customers is close to instantaneous; and both positive and negative feedback should never be taken lightly. It’s certainly harder to hide a poor product in 2018, so engineers thinking they can make a quick buck selling sloppy tech should think twice.

I have no idea how many boards have been released since 2016, but some 20+ new boards feels like a reasonable estimate. Under normal circumstances that would be awesome, because competition can be healthy. It can stimulate manufacturers to deliver better quality, higher performance and to use eco-friendly resources.

But it can also backfire and result in terrible quality and an unhealthy focus on profit.

The Mali SoC

The Mali graphics chipset is a name you see often in connection with SBCs. If you read up on the Mali SoC it sounds fantastic. All that power, open architecture, partner support – surely this design should rock, right? If only that were true. My experience with this chipset, which spans a variety of boards, is anything but fantastic. It’s actually a common factor in a growing list of boards that are unstable and unreliable.

I don’t have an axe to grind here; I have tried to remain optimistic and positive about every board that passes my desk. But Mali has become synonymous with awful performance and unreliable operation.

Of the 14-odd boards I have tested since 2016, the 8 boards that I count as useless all had the Mali chipset. This is quite remarkable considering Mali has an open driver architecture.

Open is not always best

If you have been into IoT for a few years you may recall the avalanche of criticism that hit the Raspberry PI Foundation for their choice of shipping with the Broadcom SoC. Broadcom has been a closed system with proprietary drivers written exclusively by the vendor, which made a lot of open-source advocates furious [at the time].

You know what? Going with the Broadcom chipset is the best bloody move the PI Foundation ever made; I don’t think I have ever owned an SBC or embedded platform as stable as the PI, and the graphics performance you get for $35 is simply outstanding. Had they listened to their critics and used Mali on the Raspberry PI 2b, it would have been a disaster. The IoT revolution might never even have occurred.

The whole point of the Mali open driver architecture, is that developers should have easy access to documentation and examples – so they can quickly implement drivers and release their product. I don’t know what has gone wrong here, but either developers are so lazy that they just copy and paste code without properly testing it – or there are fundamental errors in the hardware itself.

To date the only board with a Mali chipset that works well, out of all the boards I have bought and tested, is the ODroid XU4. Which leads me to conclude that something has gone terribly wrong with the art of making drivers. This really should not be an issue in 2018, but the number of dud Mali boards tells another story.

Nano-PI Fire 3

When reading the specs on the Nano-Pi Fire 3 I was impressed with just how much firepower they managed to squeeze into such a tiny form-factor. Naturally I was sceptical because of the Mali, which so far only has the ODroid going for it. But considering the $35 price it was worth the risk. Worst case I can recycle it as a headless server or something.

And the board is impressive! Let there be no doubt about the potential of this little thing, because from an engineering point of view it’s mind-blowing how much technology $35 buys you in 2018.


I don’t want to sound like a grumpy old coder, but when you have been around as many SBCs as I have, you tend to hold back on the enthusiasm. I got all worked up over the Asus Tinkerboard for example (read part 1 and part 2 here), and despite the absolutely knock-out specs, the Mali drivers and shabby kernel work crippled an otherwise spectacular board. I still find it inconceivable how Asus, a well-respected global technology partner, could have allowed a product to ship with drivers not even worthy of the public domain. And while they have made updates and improvements, it’s still nowhere near what the board could do with the right drivers.

The experience of the Nano-PI so far has been the same as with many of the other boards, especially those made and sold directly by smaller Asian manufacturers:

  • Finding the right disk-image to download is unnecessarily cumbersome
  • Display drivers lack hardware acceleration
  • Poor help forums
  • “Wiki” style documentation
  • A large Linux distro that maxes out the system

More juice

The first thing you are going to notice with the Nano-pi is how important the power supply is. The nano ships with only one USB socket (one!) so a USB hub is the first thing you need. When you add a mouse and keyboard to that equation you have already maxed out a normal 5V 2A mobile power supply.

I noticed this after having problems booting properly with a mouse and keyboard plugged in. I first thought it was the SD card, but no matter what card I tried – it still wouldn’t boot. It was only when I unplugged the mouse and keyboard that I could log in. Or should we say, couldn’t log in, because you don’t have a keyboard attached (sigh).

Now in most cases a Raspberry PI would run fine on 5V 2A, at least for ordinary desktop work; but the nano will be in serious trouble if you don’t give it more juice. So your first purchase should be a proper 5 volt, 3 amp PSU. This is also recommended for the original Raspberry PI, but in my experience you can do quite a lot before you max out a PI.

Bluetooth

A redeeming factor for the lack of USB ports and power scaling, is that the board has Bluetooth built-in. So once you have paired and connected a BT keyboard things will be easier to work with. Personally I like keyboard and mouse to be wired. I hate having to change batteries or be disconnected at random (which always happens when you least need it). So the lack of USB ports and power delegation is negative for me, but objectively I can live with it as a trade-off for more CPU power.

Lack of accelerated graphics

It’s not hard to check if X uses the gpu or not. Select a large region of the desktop (holding the left mouse button down obviously) and watch in terror as it sluggishly tries to catch up with the cursor, repainting every cached co-ordinate. Had the GPU been used properly you wouldn’t even see the repaint of the rectangle, it would be smooth and instantaneous.

SD-card reader

I’m sorry, but the SD-card reader on this puppy is the slowest I have ever used. I have never tested a device that boots so slowly, and even something simple like starting Chrome takes ages.

I tested with a cheap SD card but also a more expensive class 10 card. I’m really trying to find something cool to write about, but it’s hard when boot times are worse than the Raspberry PI 1b back in 2012.

1 gigabyte of ram

One thing that I absolutely hate about some of these cheap boards is how they imagine Ubuntu to be a stamp of approval. The Raspberry PI Foundation nailed it by creating a slim, optimized and blisteringly fast Debian distro. This is important because users don’t buy alternative boards just to throw that extra power away on Ubuntu; they buy these boards to get more CPU and GPU power (read: better value for money) for the same price.

Lubuntu is hopelessly obese for the hardware, as is the case with other cheap SBCs as well. Something like Pixel is much more interesting. You have a slim, efficient and optimized foundation to build on (or strip down). Ubuntu is quite frankly overkill and eats up all the extra power the board supposedly delivers.

When it comes to RAM, 1 gigabyte is a bit too small for desktop use. The reason I say this is because it ships with Ubuntu; why would you ship with Ubuntu unless the desktop was the focus? Which again raises the question: why create a desktop Linux device with 1 gigabyte of memory?

The nano-pi would rock with a slim distro, and hopefully someone will bake a more suitable disk-image for it.

Verdict so far

I still have a lot to test so giving you a final verdict right now would be unfair.

But I must be honest and say that I’m not that happy with this board. It’s not that the hardware is particularly awful (although the Mali drivers render it almost pointless); it’s just that it serves no clear purpose.

In order to turn this SBC into a reasonable device you have to buy parts that bring the price up to what you would pay for an ODroid XU4. And to be honest, I would much rather have one ODroid XU4 than four nano-pi boards. You have plenty of USB ports, good power scaling (the ODroid will start just fine on a normal charger), Bluetooth and pretty much everything you need.

For those rare projects where a single USB port is enough, although I cannot for the life of me think of one right now, then sure, it may be cost-effective in quantity. But for homebrew servers, gaming rigs and/or your first SBC experience – I think you will be happier with an original Raspberry PI 3b+, an ODroid XU4 or even the Tinkerboard.

Modus operandi

Having said all that… there is also something to be said about modus operandi. Different boards are designed for different systems. It may very well be that this board is designed to run Android as its primary system. So while they provide a Linux image, that may in fact only be a “bonus” distro. We shall soon see, as I will test Android next.

Next up, how does it fare with the multi-threaded uae4arm? Stay tuned for more!

Smart Mobile Studio: Q&A about v3.0 and beyond

July 1, 2018

A couple of days back I posted a sneak peek of our upcoming Smart Mobile Studio 3.0 web desktop framework; as a consequence my Facebook Messenger app has practically exploded with questions.


The desktop client / server framework is an example of what you can do in Smart

As you can imagine, the questions people ask are often very similar; so similar in fact that I will answer the hottest topics here. Hopefully that will make it easier for everyone.

If you have further questions then either ask them on our product forums or the Delphi Developer group on Facebook.


Generics

Yes indeed, we have generics running in the labs. We haven’t set a date for when we will merge the new compiler core, but it’s not going to happen until (at the earliest) v3.4. So it’s very much a part of Smart’s future, but we have a couple of steps left on our time-line for v3.0 through v3.4.

RTTI access

RTTI is actually in the current version, but sadly there is a bug there that causes the code generator to throw a fit. The fix for this depends on a lot of the sub-strata in the new compiler core, so it will be available when generics are available.

Associative arrays

This is ready and waiting in the new core, so it will appear together with generics and RTTI.

Databases

We have supported databases since day 1, but the challenge with JavaScript is that there are no “standards” like we are used to from established systems like Delphi or Lazarus.

Under the browser we support WebSQL and our own TW3Dataset. We also compiled SQLite from native C to JavaScript so we can provide a fast, lightweight SQL engine for the browser regardless of what the W3C might do (WebSQL has been deprecated but will be around for many years still).

Server side it’s a whole different ballgame. There you have drivers (or modules) for every possible database you can think of, even Firebird. But each module is implemented as the authors see fit. This is where our database framework comes in and sets a standard; we then inherit from its classes and implement the engines we want.

This framework and standard is being written now, but it won’t be introduced until v3.1 and v3.2. In the meantime you have SQLite both server-side and client-side, WebSQL and TW3Dataset.

Attributes

This question is often asked separately from RTTI, but it’s ultimately an essential part of what RTTI delivers.

So same answer: it will arrive with the new compiler-core / infrastructure.

Server-side scripting


The new theme system in action

While we do see how this could be useful, it requires a substantial body of work to make a reality. Not only would we have to implement the whole “system” namespace from scratch, since JavaScript would not be present, but we would also have to introduce a secondary namespace; one that would be incompatible with the whole RTL at a fundamental level. Instead of going down this route we opted for Node.js, where creating the server itself is the norm.


If we ever add server-side scripting it would be JavaScript support under node.js by compiling the V8 engine from C to asm.js. But right now our focus is not on server-side-scripting, but on cloud building-blocks.

Bytecode compilation

I implemented the assembler and runtime for our bytecode system (LDef) this winter / early spring; So we actually have the means to create a pure bytecode compiler and runtime.

But this is not a priority for us at this time. Smart Mobile Studio was made for JavaScript, and while it would be cool to compile Delphi source code to portable bytecodes, such a project would require not just a couple of namespaces – but a complete rewrite of the RTL. The assembler source code and parser can be found in the “Next Generation Demos” folder (Smart Mobile Studio 3.0 demos). Feel free to build on the codebase if you fancy creating your own language; get creative and enjoy! Note: using the assembler in your own projects requires a valid Smart Mobile Studio license.

Native Apps

It’s interesting that people still ask this, since it’s one of our central advantages. We already generate native apps via the Phonegap post-processor.


Phonegap turns your JS apps into native apps

Phonegap takes your compiled Smart code (visual projects only), processes it, and spits out native apps for every mobile OS on the market (and more). So you don’t have to compile separately for iOS, Android, Tizen or FireOS — Phonegap generates one for each system you need, ready for the AppStore.

So we have native covered by proxy. And with millions of users Phonegap is not going anywhere.

Release date

We are going over the last beta as I type this, and Smart Mobile Studio 3.0 should be available next week. Which day is not easy to say, but at least before next weekend if all goes according to plan.

Make sure you visit www.smartmobilestudio.com and buy your license!

Patching Smart Mobile Studio’s ACE editor

June 30, 2018

The Ace text editor has been a part of the Smart Mobile Studio component set for a while now. It is seen by many as the de-facto code and text editor for JavaScript, and much like SynEdit for Delphi and C++Builder – Ace has support for a myriad of themes, languages and even key-shortcut mapping.

Align problems

With the introduction of our new theme engine, we completely revamped the entire notion of how a theme is organized. Gone are the hard-coded styles that targeted each individual control regardless of whether it was used or not. Instead we created a theme engine with a fixed number of borders and backgrounds, which are used as building-blocks by our visual controls.

This makes life much easier for everyone, especially Smart developers who write their own custom-controls (which you kinda have to do if you want something unique).

But Ace didn’t like this one bit. It has taken quite a debugging chore to track down what the heck was causing Ace to misplace the cursor like that. It only happens when you apply a language theme to Ace (Ace has its own themes and language parsers). And it’s not a superficial bug either; it renders Ace useless for anything serious.

Fixing the padding

The “bug” turned out to be as simple as padding. In our theme-files we are very careful and avoid imposing on other styles that might be loaded. But there are two sections where we apply values globally (as in “apply this to all elements of type x, y and z”).

One of these values is the padding. Depending on the theme, the padding is either set to 1px or 2px. This is set in a constant (Smart supports scriptable stylesheets) almost at the top of the file.

Before you start changing the theme files, I suggest you do the following:

  • Copy the existing theme files and prefix them with “np” (no padding). Just keep these copies in the same folder as the other themes
  • You should now have the following files in your theme folder:
    • npDefault.css
    • npAndroid.css
    • npiOS.css
    • Default.css
    • Android.css
    • iOS.css
  • Now edit each file (only those prefixed with np), and change the constant “stdpadding” which is defined at the top of each file (line #6), setting it to “0px” rather than the original “2px”.
  • Save all changes to the files
  • Restart Smart Mobile Studio

When the Smart IDE restarts it will have your additional theme files in the project options (under “linker”).

If you use Ace in your application then simply pick one of the new files as an alternative to the older. This fixes the problem with Ace’s cursor ending up behind the last character on a styled line.

Less intrusive fix

An alternative and less intrusive remedy is to define a custom css style for Ace directly in your code. This is now very simple thanks to our css classes, but if you use Ace a lot then the above fix is probably the best for now.

Injecting a CSS style is very simple
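For the curious, injecting a style ultimately boils down to appending a style element to the document. The sketch below shows the idea in plain JavaScript; the function names are mine for illustration, not the RTL’s API:

```javascript
// Build a CSS rule from a selector and a property map.
function buildCssRule(selector, props) {
  const body = Object.entries(props)
    .map(([name, value]) => `  ${name}: ${value};`)
    .join('\n');
  return `${selector} {\n${body}\n}`;
}

// Append the rule to the page inside a <style> tag. The guard lets the
// helper load outside a browser as well (e.g. under node for testing).
function injectStyle(cssText) {
  if (typeof document !== 'undefined') {
    const tag = document.createElement('style');
    tag.appendChild(document.createTextNode(cssText));
    document.head.appendChild(tag);
  }
  return cssText;
}

// Example: a zero-padding override for an Ace container.
const aceFix = buildCssRule('.aceFix', { padding: '0px' });
injectStyle(aceFix);
```

In Smart Pascal the css class helpers wrap all of this up for you; the sketch only shows what happens underneath.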

Smart Pascal file enumeration under node.js

May 10, 2018 Leave a comment

Ok. I admit it. Writing an RTL from scratch has been one of the hardest tasks I have ever undertaken. Thankfully I have not been alone, but since I am the lead developer for the RTL, it naturally falls on me to keep track of the thousands of classes it comprises: how each affects the next, the many inheritance chains and subsequent causality timelines that each namespace represents.

We were the first company in the world to do this, to establish the compiler technology and then author a full RTL on top of that – designed to wrap and run on top of the JavaScript virtual machine. To be blunt, we didn’t have the luxury of looking at what others had done before us. For every challenge we have had to come up with solutions ourselves.

Be that as it may, after seven years we have gotten quite good at framework architecture. So whenever we need to deal with a new runtime environment such as node.js – we have already built up a lot of experience with async JSVM development, so we are able to absorb and adapt much faster than our competitors.

Digging into a new platform

Whenever I learn a new language, I typically make a little list of “how do I do this?” type questions. It can be simple, like writing text to stdout, or more elaborate like memory mapped files, inheritance model, raw memory access and similar topics.

But one of the questions has always been: how do I enumerate files in a folder?

While this question is trivial at best, it stabs at the heart of the substructure of any language. On operating systems like Linux a file is not just data on a disk like we are used to from Windows. A file can be a socket, a virtual access point exposed by the kernel, a domain link, a symbolic link or a stream. So my simple question is actually designed to expose the depth of the language I’m learning. I then write down whatever topics come up and research / experiment on them separately.

Node, like the browser, executes code asynchronously. This means that the code you write cannot be blocking (note: node does support synchronous file IO methods, but you really don’t want to use them in a server. They are typically used before the server is started to load preferences files and data).

As you can imagine, this throws conventional coding out the window. Node exposes a single function that returns an array of filenames (array of string), which helps, but it tells you nothing about the files. You don’t get the size, the type, create and modify timestamps – just the names.

To get the information I just mentioned you have to call a function called “fs.stat”. This is a common POSIX filesystem command. But again we face the fact that everything is async, so that “for / next” loop is useless.

Luke Filewalker

In version 3.0 of Smart Mobile Studio our Node.JS namespace (collection of units with code) has been upgraded and expanded considerably. We have thrown out almost all our older dependencies (like utf8.js and base64.js) and implemented these as proper codec classes in Smart Pascal directly.

Our websocket framework has been re-written from scratch. We threw out the now outdated websocket-io and instead use the standard “ws” framework that is the most popular and actively maintained module on NPM.

We have also implemented the same storage-device class that is available in the browser, so that you can write file I/O code that works the same both server-side and client-side. The changes are in the hundreds so I won’t iterate through them all here; they will be listed in detail in the release-notes document when the time comes.

But what is a server without a fast, reliable way of enumerating files?

Well, here at the Smart Company we use our own products. So when writing servers and node micro-services we face the exact same challenges as our customers would. Our job is to write ready solutions for these problems, so that you don’t have to spend days and weeks re-inventing the wheel.

Enumerating files is handled by the class TNJFileWalker (I was so tempted to call it Luke). This takes care of everything for you, all the nitty-gritty is neatly packed into a single, easy to use class.

Here is an example:

Enumerating files has literally been reduced to child’s play

The class also exposes the events you would expect, including a filtering event where you can validate whether a file should be included in the final result. You can even control the dispatching speed (or delay between item processing), which is helpful for payload balancing. If you have 100 active users all scanning their files at the same time – you probably want to give node the chance to breathe (20ms is a good value).
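The dispatch-delay idea itself is simple enough to sketch in plain JavaScript (illustrative only, not the TNJFileWalker source): process one item, then yield to the event loop for a fixed delay before the next.

```javascript
// Walk through `items` one at a time, pausing `delayMs` between each
// so the event loop can service other requests in the meantime.
function dispatchWithDelay(items, delayMs, onItem, onDone) {
  let index = 0;
  function step() {
    if (index >= items.length) {
      onDone();
      return;
    }
    onItem(items[index++]);
    setTimeout(step, delayMs); // breathing room for node
  }
  step();
}
```

With 100 concurrent users, a delay of around 20ms per item keeps any single scan from monopolizing the server.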

The interface for the class is equally elegant and easy to understand:


What would you prefer to maintain? 500,000 lines of JavaScript or 20,000 lines of Pascal?

Compare that to some of the spaghetti JavaScript developers have to live with just to perform a file-walk and then do a recursive “delete folder”. Sure hope they check for “/” so they don’t kill the filesystem root by accident.

const fs = require('fs');
const path = require('path');

function filewalker(dir, done) {
    let results = [];

    fs.readdir(dir, function(err, list) {
        if (err) return done(err);

        var pending = list.length;

        if (!pending) return done(null, results);

        list.forEach(function(file){
            file = path.resolve(dir, file);

            fs.stat(file, function(err, stat){
                // If directory, execute a recursive call
                if (stat && stat.isDirectory()) {
                    // Add directory to array [comment if you need to remove the directories from the array]
                    results.push(file);

                    filewalker(file, function(err, res){
                        results = results.concat(res);
                        if (!--pending) done(null, results);
                    });
                } else {
                    results.push(file);

                    if (!--pending) done(null, results);
                }
            });
        });
    });
};

function deleteFile(dir, file) {
    return new Promise(function (resolve, reject) {
        var filePath = path.join(dir, file);
        fs.lstat(filePath, function (err, stats) {
            if (err) {
                return reject(err);
            }
            if (stats.isDirectory()) {
                resolve(deleteDirectory(filePath));
            } else {
                fs.unlink(filePath, function (err) {
                    if (err) {
                        return reject(err);
                    }
                    resolve();
                });
            }
        });
    });
};

function deleteDirectory(dir) {
    return new Promise(function (resolve, reject) {
        fs.access(dir, function (err) {
            if (err) {
                return reject(err);
            }
            fs.readdir(dir, function (err, files) {
                if (err) {
                    return reject(err);
                }
                Promise.all(files.map(function (file) {
                    return deleteFile(dir, file);
                })).then(function () {
                    fs.rmdir(dir, function (err) {
                        if (err) {
                            return reject(err);
                        }
                        resolve();
                    });
                }).catch(reject);
            });
        });
    });
};

Writing Smart Pascal Controls, async initialization and the tao pattern

May 7, 2018 Leave a comment

Async programming can take a bit of getting used to if you come straight from Delphi or Lazarus. So in this little article I am going to show you an initialization pattern that will help you initialize your custom-controls and forms in a way that is reliable.

Object Ready

In 99.9% of the custom-controls you create, you will either inherit directly from an existing control (like TW3Button, TW3EditBox or other traditional visual controls), or directly from TW3CustomControl.

If you have a quick look at the source for the RTL, which we assume you have, you will find that our RTL is very familiar. It is loosely based on the LCL (Lazarus Component Library) and the VCL (Visual Component Library), with a dash of Mono GTK# thrown in for good measure. But while familiar in appearance, it really is a completely new RTL written to deliver the best of what HTML5 / JS has to offer.

One of the more interesting methods of TW3CustomControl is ObjectReady. This is actually introduced further down in the inheritance chain with TW3MovableControl, but most developers want the infrastructure TW3CustomControl delivers – so that will be the focus of the topic today.

In short, ObjectReady is called when your visual control has been created, injected into the DOM and is ready for use.

The common mistake

A common mistake with ObjectReady() is to assume that the ready state somehow covers any child elements you might have created for your control. This is not the case. ObjectReady() is called when the current control has finished its initialization and is ready for manipulation.

Just before the ObjectReady() method is called, the csReady flag is added to the ComponentState set (note: if you don’t know what a set is, it’s a bit like an array of enums. Please google “Delphi sets” to investigate this further if you are just starting out).

Checking if a control is ready can be done manually by reading the csReady state from a control’s ComponentState. But naturally, that only works if the control has already reached that state. Prior to the ready state, the csCreating state is added to ComponentState; this is removed as the initialization completes and the control enters the ready state.

The united states of custom-controls

To better understand when the different component states are set and what really happens when you create a visual control, let’s go through the steps one by one.

  • TW3TagObj
    • Ordinary constructor (create) is called
      • csCreating is added to ComponentState
      • DOM element name to be managed is obtained via TW3TagObj.MakeElementTagId()
    • Handle is obtained via TW3TagObj.MakeElementTagObj()
      • csLoading is added to ComponentState
      • A DOM level identifier (name) is assigned to the control
      • ZIndex is calculated and assigned to the control
    • StyleTagObject() method is called for any css adjustments
    • InitializeObject() is called, this is the constructor in our RTL
    • Control instance is registered with the global control tracker
      • csCreating is removed from ComponentState
      • csLoading is removed from ComponentState
  • TW3MovableControl
    • Alpha blending is initialized but not activated
    • if cfIgnoreReadyState is set in CreationFlags() then ObjectReady is called immediately without any delay
    • If cfIgnoreReadyState is not set, the ReadySync() method is called

The ReadySync() method is of special importance here.

Since JavaScript is asynchronous, the reality is that whatever child controls you have created during InitializeObject() can still be under construction even after InitializeObject() finishes. The JavaScript engine might have returned a handle, but the data for the instance is still under construction behind the scenes.

To be blunt: Never trust JavaScript to deliver a 100% ready to use element. If the browser is under heavy stress from other websites and internal work, that can have catastrophic consequences on the state of the objects it returns.

This is one of many reasons that made us choose to write our RTL from scratch rather than just fork CLX or try to be as Delphi friendly as possible. That would have been much easier for us, but it would also be selling you the tooth fairy, because that’s not how JavaScript works.

We want our users to have full control and enjoy the same freedom and simplicity that made us fall in love with object pascal all those years ago. And if we forced JavaScript into a pre-fabricated mold like the LCL; the spark and flamboyance that JavaScript brings to the table would have been irreparably damaged if not lost.

But let’s get back on topic.

As mentioned above, ReadySync() is of special importance. It will continuously check if the control is “ready” on a 10ms interval, and it keeps going until the criteria matches or it times out. To avoid infinite loops it has a maximum call stack of 300. Meaning it will keep trying 300 times, a total of 3 seconds, and then break out for safety reasons.
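Stripped of the RTL plumbing, that mechanism can be sketched in a few lines of plain JavaScript (names and structure are mine, for illustration only):

```javascript
// Poll a ready-condition on a fixed interval, giving up after a
// maximum number of attempts (10ms x 300 = roughly 3 seconds).
function readySync(isReady, onDone, intervalMs, maxAttempts) {
  let attempts = 0;
  function poll() {
    if (isReady()) {
      onDone(true);            // criteria matched
    } else if (++attempts >= maxAttempts) {
      onDone(false);           // timed out, break out for safety
    } else {
      setTimeout(poll, intervalMs);
    }
  }
  poll();
}
```

The important property is that the check never blocks: between attempts, the JavaScript engine is free to finish constructing the very objects we are waiting for.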

But once the criteria for ready-state matches (or the waiting interval times out) – ObjectReady() is finally called.

Keep your kids in order

While knowing when the control is ready is great for writing components, what practical purpose does it really serve if the child controls are excluded?

Well, again we are back at freedom. We could have baked in a wait sequence for our forms (since the designer knows what child elements are involved). But sadly that won’t work on custom controls that the IDE doesn’t manage. And it would only work on forms.

A negative side-effect of this (I did test it, though) is that a form will remain blank until all child controls, their children and their grandchildren – all report “ready”.

In short: our code cannot manage what it doesn’t know. The IDE cannot (for obvious reasons) know what your code creates at runtime. And in large and complex controls like grids, planners or MDI systems – such code would get in your way and render the benefits null and void quickly.

As of writing there are some creative solutions to this, all trying to get the timing right:

  • Write your own checking routines inspired by ReadySync
  • Ignore the whole thing and just check ready-state and that child elements are not NIL in Resize(). This is what most people do.
  • Use TW3Dispatch and throw in an extra Resize() call somewhere in ObjectReady()

While perfectly legal (or perhaps better said: not illegal), these solutions are not very reliable. If the browser is under stress it can prioritize your layout code as less important – and suddenly you have a button where it’s not supposed to be, or a panel that never stretched as planned.

The Tao pattern

Tao (time aware operation) is a pattern I created to solve this problem with a bit of grace. Much like the ReadySync() method we talked about earlier, it performs interval-based checking of child element states, and thus allows you to do operations in a timely fashion.

As you probably know, under Smart Pascal you are not supposed to override the constructor when you create new controls. Instead you override InitializeObject(). The same goes for the destructor, there you override FinalizeObject().

So the 5 “must-know” methods for writing your own controls are:

  1. InitializeObject
  2. FinalizeObject
  3. ObjectReady
  4. Resize
  5. StyleTagObject

Note: Since Smart Mobile Studio has an evolved theme engine, it is rare that people override StyleTagObject() these days. But there are cases where you want to set some HTML attribute or alter a style; changes that are too small to justify a new style in the global stylesheet. It’s also the place to call ThemeReset() if you don’t want your control to use themes at all, or perhaps set a different theme border and background.

OK, let’s look at a practical example of how TAO works. It is simple, flexible and requires minimal adaptation if you want to adjust older controls you have made.

Let’s build a simple path selector control. Easy and ad-hoc

In this example we will be making a path selector. This is essentially an edit box with a button situated at the far-right. Clicking the button would bring up some form of dialog. I am excluding that for brevity since it’s not the control that is interesting here, but rather how we initialize the control.

type

  TTaoControl = class(TW3CustomControl)
  private
    FButton:  TW3Button;
    FEdit:    TW3EditBox;
  protected
    procedure InitializeObject; override;
    procedure FinalizeObject; override;
    procedure ObjectReady; override;
    procedure StyleTagObject; override;
    procedure Resize; override;
  end;

As you can see, the control class is defined exactly the same way as before. There is no change whatsoever in how you write your classes. Now let’s look at the implementation:

procedure TTaoControl.InitializeObject;
begin
  inherited;
  TransparentEvents := false;
  SimulateMouseEvents := false;

  // Create our editbox
  FEdit := TW3EditBox.Create(self);
  FEdit.SetSize(80, 28);

  // Create our select button
  FButton := TW3Button.Create(self);
  FButton.SetSize(70, 28);
end;

procedure TTaoControl.FinalizeObject;
begin
  FEdit.free;
  FButton.free;
  inherited;
end;

procedure TTaoControl.ObjectReady;
begin
  inherited;
  // set some constraints (optional)
  Constraints.MinWidth := 120;
  Constraints.MinHeight := 32;
  Constraints.Enabled := true;

  // TAO: Wait for the child controls to reach ready-state
  TW3Dispatch.WaitFor([FEdit, FButton], 5,
    procedure (Success: boolean)
    begin
      if Success then
      begin
        // set some properties for the edit box
        FEdit.ReadOnly := true;
        FEdit.PlaceHolder := 'Please select a file';

        // set caption for button
        FButton.Caption := 'Select';

        // Do an immediate resize
        Resize();
      end;
    end);
end;

procedure TTaoControl.StyleTagObject;
begin
  inherited;

  // Set a container border. This border is
  // typically used by TW3Panel and TW3GroupBox
  ThemeBorder := btContainerBorder;
end;

procedure TTaoControl.Resize;
var
  LBounds:  TRect;
  dx, dy: integer;
  wd, EditWidth: integer;
begin
  inherited;
  // Make sure we don't do anything if resize has been
  // called while the control is being destroyed
  if not (csDestroying in ComponentState) then
  begin
    // Make sure we have ready state
    if (csReady in ComponentState) then
    begin
      // Check that child elements are all assigned
      // and that they have their csReady flag set in
      // ComponentState. This can be taxing. A more lightweight
      // version is TW3Dispatch.Assigned() that doesn't check
      // the ready state (see class declaration for more info)
      if TW3Dispatch.AssignedAndReady([FButton, FEdit]) then
      begin
        // Finally: layout the controls. This can be
        // optimized quite a bit, but focus is not on
        // layout code, but rather the sequence in which operations
        // are executed and handled.
        LBounds := ClientRect;
        wd := LBounds.width;
        dy := (LBounds.Height div 2) - (FEdit.Height div 2);
        EditWidth := (wd - FButton.Width) - 4;
        FEdit.SetBounds(LBounds.left, dy, EditWidth, FEdit.Height);

        dx := LBounds.left + EditWidth + 2;
        dy := (LBounds.Height div 2) - (FButton.Height div 2);
        FButton.SetBounds(dx, dy, FButton.Width, FButton.Height);
      end;
    end;
  end;
end;

If you look closely, what we do here is essentially to spread the payload and cost of creating child elements over multiple methods.

We reduce the constructor, InitializeObject(), to just creating our child controls and setting some initial sizes. This last point, setting an initial size, is actually important: if the control has no size (width = 0, or height = 0) the browser will not treat the element as visible, which in turn causes TW3Dispatch.WaitFor() to wait until a size is set.

TW3Dispatch methods

TW3Dispatch is a partial class. This is something that neither Delphi nor Freepascal supports, and it has its roots in C# and the .NET framework.

In short it means that a class can have its implementation spread over multiple files. So instead of having to complete a class in a single unit, or inherit from a class and then expand it – partial classes allow you to expand a class over many units.

This is actually really cool and useful, especially when you work with multiple targets. For example, TW3Dispatch is first defined in System.Time.pas which is the universal namespace that only contains code that runs everywhere. This gives you the basic functionality like delayed execution (just open the unit and have a look).

The class is then further expanded in SmartCL.Time (SmartCL being the namespace for visual, HTML-based JavaScript applications). There it gains methods like RequestAnimationFrame(), which doesn't exist under node.js for example.

Smart Mobile Studio’s namespaces make good use of partial classes

TW3Dispatch is further expanded in SmartCL.Components.pas, which is the core unit for visual controls. So starting with version 3.0 the functions I have just demonstrated will be available in the RTL itself.

Until then, you can download TW3Dispatch with the TAO methods here. You need to put it in your own unit, and naturally – use it with care.

Click here to download the TW3Dispatch code.

Note: This code is not free or open-source. It is intended for Smart Mobile Studio owners exclusively, and made available here so registered users can start working with the control coding pattern.

Using Smart Mobile Studio under Linux

April 22, 2018 Leave a comment

Every now and then when I post something about Smart Mobile Studio, an individual or two wants to inform me how they cannot use Smart because it’s not available for Linux. While more rare, the same experience happens now and then with OS X.


The Smart desktop demo running in Firefox on Ubuntu, with Quake 3 at 60 fps

While the request for Linux or OS X support is both valid and understandable (and something we take seriously), more often than not these questions can be a pre-cursor to a larger picture; one that touches on open-source, pricing and personal philosophical points of view.

Truth be told, the price we ask for Smart Mobile Studio is close to symbolic. Especially if you take the time to look at the body of work Smart delivers. We are talking hundreds of hand-written units with thousands of classes, each specifically adapted for HTML5, Phonegap and Node.js. Not to mention ports of popular JavaScript frameworks.

If you compare this to native object pascal development tools with similar functionality, they can set you back thousands of dollars. With an asking price of $149 for the pro edition, $399 for the enterprise edition, and a symbolic $42 for the basic edition, ours is an affordable solution. Also keep in mind that this gives you access to updates for a duration of 12 months. When was the last time you bought a full development suite that allows you to write mobile applications, platform independent servers, platform independent system services and HTML5 web applications for less than $400?


Our price model is more than reasonable considering what you get

By platform independent we mean that in the true sense of the word: once compiled, no changes are required. You can write a system service on Windows and it will run just fine under Linux or OS X. No re-compile needed. You can take a server and copy it to Amazon or Azure, run it in a cluster or scale it from a single instance to 100 instances without any change. That was unheard of for object pascal until now.

Smart Mobile Studio is the only object pascal development system that delivers a stand-alone IDE, a stand-alone compiler, and a vast object-oriented run-time library written from scratch to cover HTML5, Node.js and embedded systems that run JavaScript.

And yes, we know there are other systems in circulation, but none of them are even close to the functionality that we deliver. Functionality that has been polished for seven years now. And our RTL is growing every day to capture and expose more and more advanced functionality that you can use to enrich your products.


The RTL class browser shows the depth of our RTL

As of writing we have a team of six people working on Smart Mobile Studio. We have things in our labs that are going to change the way people build applications forever. We were the first to venture into this new landscape. There was nobody we could imitate, draw inspiration from or learn from. We had to literally make the path as we moved forward.

And our vision and goal remains the same today as it was seven years ago: to empower object pascal developers – and to secure their investment in the language and methodology that is object pascal.

Discipline and purpose

There is so much I would like to work on right now. Things I want to add to Smart Mobile Studio because I find them interesting, powerful and I know people are going to love them. But that style of development, the “Garage days” as people call it, is over. It does wonders in the beginning of a project maybe, but eventually you reach a stage where a formal timeline and business plan must be carved in stone.

And once defined, you have to stick to it. It would be an insult to our customers if we pivoted left and right on a whim. Like every company we have room for research, even a couple of skunkwork projects, but our primary focus is to make our foundation rock solid before further growth.


By tapping into established JavaScript frameworks you can cover more than 40 embedded systems and micro-controllers. More and more hardware supports JS for automation

Our “garage days” ended around three years ago, and through hard work we defined our timeline, business model and investor program. In 2017 we secured enough capital to continue full-time development.

Our timeline has been published earlier, but we can re-visit some core points here:

The visual components that shipped with Smart Mobile Studio in the beginning were meant more as examples than actual ready-to-use modules. This was common for other development platforms of the day, such as Xamarin’s C# / Mono toolchain, where you were expected to inherit from and implement aspects of a “partial class”. This is also why Smart Pascal has support for partial classes (neither Delphi nor Freepascal supports this wonderful feature).


One of our skunkwork projects is a custom Linux distro that runs your Smart applications directly in the Linux framebuffer. No X or desktop, just your code. Here running “the smart desktop” as the only visual front-end under x86 VMware

Since developers coming from Delphi had different expectations, there was only one thing to do: completely re-write every single visual control (and add a few new controls) so that they matched our customers’ expectations. So the first stretch of our timeline has been 100% dedicated to the visual aspects of our RTL. While doing so we have made the RTL faster and more efficient, and added some powerful sub-systems:

  • A dedicated theme engine
  • Unified event delegation
  • Storage device classes
  • Focus and control tracking
  • Support for relative position modes
  • Support for all boxing models
  • Inline linking ( {$R “file.js”} will now statically link an external library)
  • And much, much more

So the past eight months have been all about visual components.


Theming is important

The second stretch, which we are in right now, is dedicated to the non-visual infrastructure. This means in particular Node.js but also touches on non-visual components, TAction support and things that will appear in updates this year.

Node.js is especially important since it allows you to write platform and chipset independent web servers, system services and command-line applications. This is pioneering work and we are the first to take this road. We have successfully tamed the alien landscape of JavaScript, both for client, mobile and server – and terraformed it into a familiar, safe and productive environment for object pascal developers.

I feel the results speak for themselves, and our next update brings Smart Mobile Studio to the next level: full-stack cloud and web development. We now cover back-end, middle-ware and front-end technologies. And our RTL now stretches from micro-controllers to mobile applications to clustered cloud services.

This is the same technology used by Netflix to process terabytes of data every second on a global scale. Which should tell you something about the potential involved.

Working on Linux

Since Smart Mobile Studio was designed to be a swiss army knife for Delphi and Lazarus developers, capable of reaching segments of the market where native code is unsuitable – our primary focus is Microsoft Windows. At least for now.

Delphi itself is a Windows-based development system, and even though it supports multiple targets, Windows is still the bread and butter of commercial software development.

Like I mentioned above, we have a timeline to follow, and until we have reached the end of that line, we are not prepared to refactor our IDE for Linux or OS X. This might change sooner than people think, but until our timeline for the RTL is concluded, we will not allocate time for making the IDE platform independent.

But, you can actually run Smart Mobile Studio on both Linux and OS X today.

Linux has a system called Wine. This is a system that implements the Windows API, but delegates all the calls to Linux. So when a Windows program calls a WinAPI through Wine, it’s delegated to the Linux variation of the same call. This is a massive undertaking, but it has years of work behind it and functions extremely well.

So on Linux you can install it by opening up a shell and typing:

sudo apt-get install wine

I take for granted here that your Linux flavour has apt installed (I’m using Ubuntu since that is easy to work with), which is the package manager that gives you the “apt-get” command. If you have some other system then just google how to install a package.

With Wine and its dependencies installed, you can run the Smart Mobile Studio installer. Wine will create a virtual, sandboxed disk for you – so that all the files end up where they should. Once finished you punch in the license serial number as normal, and voila – you can now use Smart Mobile Studio on Linux.

Note: in some cases you have to right-click the SmartMS.exe and select “run with -> Wine”, but usually you can just double-click the exe file and it runs.

Smart Mobile Studio on OSX

Wine has been ported to OS X, where it’s even more user-friendly. You download a program called WineBottler, which takes Smart and bundles it with Wine plus any dependencies it needs. You can then start Smart Mobile Studio like it was a normal OS X application.

Themes and look

The only problem with Wine is that it doesn’t support Windows themes out of the box. It would be illegal for them to ship those files. But you can manually copy over the Windows theme files and install them via the Wine config application. Once installed, Smart will look as it should.

By default the old Windows 95 look & feel is used by Wine. Personally I don’t care too much about this; it’s being able to code, compile and run the applications that matters to me – but if you want a more modern look then just copy over the Windows theme files and you are all set.


Multiplication decomposer

April 22, 2018 Leave a comment

Here is a fun little number decomposer I made a while back. As you might know, the ancient Egyptians were no strangers to binary. They also didn’t use multiplication tables like we do – but instead used a method called “double down”.

To demonstrate this technique I wrote a simple little program that takes a multiplication and breaks it down to the numbers an Egyptian would look up in his table to find the right answer.


It’s actually pretty cool. They did not multiply like we do at all; they doubled and added.
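Before the Smart Pascal version below, the doubling method itself can be sketched in a few lines of Python (a language-neutral sketch of the technique, not the article’s code):

```python
def egyptian_multiply(number, multiplier):
    """Multiply two positive integers the Egyptian way: build a
    doubling table, then add the rows whose left-hand value is a
    power of two present in `number`."""
    left, right = [], []
    bit, doubled = 1, multiplier
    while bit <= number:
        left.append(bit)        # 1, 2, 4, 8, ... (the "left pillar")
        right.append(doubled)   # the multiplier, doubled each step
        bit *= 2
        doubled *= 2
    # Keep the rows whose left value corresponds to a set bit in `number`
    picked = [r for l, r in zip(left, right) if number & l]
    return picked, sum(picked)

rows, total = egyptian_multiply(13, 7)
print(rows, total)  # [7, 28, 56] 91  (13 = 8 + 4 + 1, so 56 + 28 + 7 = 91)
```

The two lists mirror the left and right "pillars" built by the Pascal class below; the sum of the picked rows is the product.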

You will need to drop two textboxes, three labels, one button and a memo control on your form (see the layout in the picture).


unit Form1;

interface

uses
  System.Types,
  System.Types.Convert,
  System.Objects,
  System.Time,
  SmartCL.System,
  SmartCL.Time,
  SmartCL.Graphics,
  SmartCL.Components,
  SmartCL.FileUtils,
  SmartCL.Forms,
  SmartCL.Fonts,
  SmartCL.Theme,
  SmartCL.Borders,
  SmartCL.Application,
  SmartCL.Controls.Button,
  SmartCL.Controls.Label,
  SmartCL.Controls.EditBox,
  SmartCL.Controls.Memo;

type

  TDecomposer = Class(TObject)
  private
    FBitValues:   TIntArray;
    FLeftPillar:  TIntArray;
    FRightPillar: TIntArray;
    FSumPillar:   TIntArray;
  protected
    procedure BuildLeftPillar(Number:Integer);
    procedure BuildRightPillar(Multiplier:Integer);
  public
    property  LeftPillar:TIntArray read FLeftPillar;
    property  RightPillar:TIntArray read FRightPillar;
    property  SumPillar:TIntArray read FSumPillar;

    function  DecomposeNumber(aNumber:Integer):String;
    function  DecomposeMultiplication(aNumber,aMultiplier:Integer):String;

    constructor Create; virtual;
  end;

  TForm1 = class(TW3Form)
    procedure W3Button1Click(Sender: TObject);
  private
    {$I 'Form1:intf'}
  protected
    procedure InitializeForm; override;
    procedure InitializeObject; override;
    procedure Resize; override;
  end;

implementation

{ TForm1 }

procedure TForm1.W3Button1Click(Sender: TObject);
begin
  var LObj := TDecomposer.Create;
  try
    var LText := LObj.DecomposeMultiplication(StrToInt(w3editbox1.Text),StrToInt(w3editbox2.text));
    w3memo1.text := LText;
    writeln(LText);
  finally
    LObj.free;
  end;
end;

procedure TForm1.InitializeForm;
begin
  inherited;
  // this is a good place to initialize components
end;

procedure TForm1.InitializeObject;
begin
  inherited;
  {$I 'Form1:impl'}
end;

procedure TForm1.Resize;
begin
  inherited;
end;

//#############################################################################
// TDecomposer
//#############################################################################

constructor TDecomposer.Create;
begin
  inherited Create;

  (* Build table of bitvalues *)

  var mValue := 1;
  for var x := 1 to 32 do
  begin
    FBitValues.add(mValue);
    mValue := mValue * 2;
  end;
end;

procedure TDecomposer.BuildLeftPillar(Number:Integer);
begin
  FLeftPillar.clear;
  if FBitValues.length>0 then
  begin
    for var x := FBitValues.low to FBitValues.high do
    begin
      if FBitValues[x] <= Number then
        FLeftPillar.add(FBitValues[x])
      else
        break;
    end;
  end;
end;

procedure TDecomposer.BuildRightPillar(Multiplier:Integer);
begin
  FRightPillar.clear;
  if FLeftPillar.length>0 then
  begin
    for var x := FLeftPillar.low to FLeftPillar.high do
    begin
      FRightPillar.add(Multiplier);
      Multiplier:=Multiplier * 2;
    end;
  end;
end;

function TDecomposer.DecomposeMultiplication
         (aNumber,aMultiplier:Integer):String;
begin
  var mSum := aNumber * aMultiplier;
  BuildRightPillar(aMultiplier);

  result := aNumber.toString + ' x '
     + aMultiplier.toString
     + ' = '
     + DecomposeNumber(mSum);
end;

function TDecomposer.DecomposeNumber(aNumber:Integer):String;
begin
  FSumPillar.clear;
  FLeftPillar.clear;
  FRightPillar.clear;

  BuildLeftPillar(aNumber);

  for var x := FLeftPillar.low to FLeftPillar.high do
  begin
    if TInteger.getBit(x,aNumber) then
      FSumPillar.add(FBitValues[x]);
  end;

  if FSumPillar.length>0 then
  begin
    result := aNumber.ToString + ' = ';
    for var x := FSumPillar.low to FSumPillar.high do
    begin
      if x = FSumPillar.low then
        result += FSumPillar[x].toString
      else
        result += ' + ' + FSumPillar[x].toString;
    end;
  end;
end;

function QTX_GetNumberProducer(aNumber:Integer):String;
const
  QTX_BITS:  Array[0..31] of Integer =
  ( 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048,
    4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288,
    1048576, 2097152, 4194304, 8388608, 16777216, 33554432,
    67108864, 134217728, 268435456, 536870912, 1073741824,
    2147483648);
begin
  var LStack: array of string;
  if aNumber > 0 then
  begin
    for var x := QTX_BITS.low to QTX_BITS.high do
    begin
      if TInteger.getBit(x,aNumber) then
        LStack.add(QTX_BITS[x].toString);
      if QTX_BITS[x] >= aNumber then
        break;
    end;

    if LStack.length>0 then
    begin
      result := aNumber.toString + ' = ';
      for var x := LStack.low to LStack.high do
      begin
        if x > LStack.low then
          result += ' + ' + LStack[x]
        else
          result += LStack[x];
      end;
    end;
  end else
    result := '0';
end;

function QTX_GetNumberMultiplier(aFirst,aSecond:Integer):String;
begin
  var LSum := aFirst * aSecond;
  result := aFirst.ToString() + ' x ' + aSecond.ToString();
  result += ' = ';
  result += QTX_GetNumberProducer(LSum);
end;

initialization
  Forms.RegisterForm({$I %FILE%}, TForm1);
end.

Paypal, enough is enough

April 20, 2018 6 comments

I used to love PayPal. Really, it was a brilliant solution to a global problem.
As a software developer living in Norway, spending most of my time with people who live and work in the United States, India or the Arab Emirates – commerce can sometimes be a challenge. It’s a strange situation to be in, where you have lunch with people thousands of miles away. You call up your friends in NYC after work just like you would a friend down the street; and on the weekend you share a cold beer over video chat, or team up on Playstation to enjoy a game together.

I have become, for all intents and purposes, an American by proxy.

As a software developer, part of what I do is produce software components. These are intricate blocks of code that can be injected into programs, saving other developers the time it takes to implement the functionality from scratch. This is a fantastic thing, because one person cannot possibly cope with “everything”. Buying components and libraries is a fundamental part of what a software manager does when prototyping a new product. It is a billion dollar industry and it’s not going away any time soon.

The reason is simple: if you hire someone to research, implement and test something you need in your product, the wages you pay will be 10-100 times higher than if you just buy a pre-fabricated module. Allocating two developers to work full-time for a month on a PDF rendering component (as an example of a complex component) will cost you two months’ salary. It also leaves you with the responsibility for bugs, updates – the whole nine yards.

“PayPal has a policy where it completely ignores the voice of merchants. They automatically side with the customer and will thus remove funds from your account as they please”

Let’s say you have two junior developers making $6,000 a month each, coupled with an estimate of 8 weeks to finish the functionality (which is always wrong, so add a couple of weeks for Q&A). That brings us to roughly $24,000 in wages, and closer to $30,000 once the Q&A weeks are included. OR — you can just buy a ready-to-use component for $500 and have PDF support up and running in a day. This also delegates bug-fixing, documentation and updates onto the vendor.

When I wanted to set up shop, I figured that PayPal would be an excellent platform to use. I mean, it’s been around for so long that it’s become intrinsic to international, online economics. It’s available everywhere, and their percentage of sales is reasonable.

Well, that turned out to be a mistake. PayPal is not cool at all when you move from consumer to merchant – a process which, by the way, takes weeks if you live outside the US. You have to send in photocopies of your passport, credit card receipts and social security information; something that is illegal in Norway and a serious breach of privacy.

It’s only your money if we allow it

We live in a world where there are a lot of terrible people. People that sell broken goods, that lie, steal and are willing to do just about anything if it benefits them. Honesty is almost regarded as a burden in online business, which I detest and refuse to take part in.

“The second and third calls [to PayPal] resulted in 45 and 90 minutes of “please hold”. They literally exhausted their own merchant to make the case go away.”


UPS is on my door more than the average American household. This is the new reality.

Lord knows I have been the victim of some extremely unjust sales representatives in my time (haven’t we all). And the experience has often been that you are helpless once you have received a product. It doesn’t matter if the product you received was faulty, the wrong size – or even the wrong bloody product! As a consumer you often have to calculate how much it will cost you to fight back. And more often than not, fighting back costs more than just accepting that you have been ripped off. I mean, nobody is stupid enough to return the wrong goods to China (for example), because you will never hear from them again.

Well, once I switched from being just a consumer to selling goods and becoming a PayPal merchant – I was shocked to discover that it’s the same situation on the other side! But not from small, semi-anonymous scam artists; no, it turned out to be PayPal itself.

PayPal has a policy where it completely ignores the voice of merchants. They automatically side with the customer and will thus remove funds from your account as they please. This happens without a dialog with you as a merchant first. They just waltz in and help themselves to your funds. It’s like something out of a 12th century trial where you are guilty by default and thus there is no room for documentation or evidence to the contrary.

“PayPal didn’t even bother to contact me for verification or comments. They just helped themselves to my registered credit card – which in Norway would have landed them in jail for theft.”

In my case, where I sell software components – which are by nature digital and delivered via e-mail – this leaves me as a vendor completely without a voice.

Just weeks ago I got a strange e-mail from a customer who claimed he had not received my software. I naturally took that very seriously, so I checked, double checked and triple checked that the software had been sent. I also checked the log on my server to see if the download ticket had been activated (it is marked as active when a full download has been completed; it remains open for 12 months, which is the duration of the license).

Well, the ticket was active, so there was no doubt that the customer had indeed downloaded the product – and downloaded it in full. The server picks up on partial downloads, so the ticket doesn’t activate should the customer have network problems.
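The ticket logic described above is simple enough to sketch. This is a hypothetical illustration, not the actual server code: the point is that only a complete transfer flips the ticket to “active”.

```python
class DownloadTicket:
    """Hypothetical sketch of the delivery-ticket logic described above:
    a ticket only activates once a download completes in full, so a
    partial transfer (network problems) never marks it as spent."""

    def __init__(self, file_size, valid_months=12):
        self.file_size = file_size
        self.valid_months = valid_months   # license duration; re-downloads stay allowed
        self.active = False

    def record_transfer(self, bytes_served):
        # Partial downloads are ignored; only a complete transfer activates.
        if bytes_served >= self.file_size:
            self.active = True
        return self.active
```

With this scheme, an "active" ticket is hard evidence that the full file was delivered at least once.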

But hey, accidents can happen; maybe the customer managed to delete the file, or his hard disk was damaged. I gave him the benefit of the doubt and informed him that while the ticket had been activated, he could download as many times as he wanted for the duration of the 12 months.

In return I got an email saying: “He he, its all good. Thx!”

Well, it sure was all good for him, but not for me. Not only had this man downloaded and made use of my product, he sent a false claim to PayPal stating that he never received the software. And since PayPal can’t deal with packages that are not shipped through their explicit channels (which are made for physical goods, not digital), that was that.

PayPal didn’t even bother to contact me for verification or comments. They just helped themselves to my registered credit card – which in Norway would have landed them in jail for theft.

“A parallel is of a man entering a store, buying an ice-cream, slowly removing the wrapping and starting to eat – while walking over to the store manager claiming, he never got an ice-cream to begin with.”

Had PayPal bothered to contact me, which both Norwegian and European law demands, I could easily document that the customer had indeed downloaded and activated the product. I have both the e-mails between the customer and myself, as well as the ticket logs from the hosting company I use.

There is no doubt that this ticket has been spent, only hours before this scam artist sent his false claim to PayPal.

International vs. national law

Norwegian law gives a merchant 3 chances to rectify a situation. This law applies where the customer has not received what they ordered, where they have received a broken item – or where there have been problems with delivery.

When you sell software, however, there are two types of product with very different rules attached. The second is rarely used outside the world of engineering:

  • Compiled proprietary software, which doesn’t reveal how the product is made; the customer does not have access to the source code.
  • Source code for proprietary software, where the customer receives the actual source code for the product and is allowed to adapt the code. But there are strict rules against sharing or re-selling it – since it’s very much intellectual property, and once shared, it cannot be un-shared.

The latter, source code packages (which is what my customer bought), also fall under “spoilables”, meaning that once the customer has received the package, they cannot return it. This applies to other goods too, such as underwear. Since the merchant cannot know whether the product has been used (or copied, in the case of source code), there is never any return policy on such goods once delivered. If the product has not been delivered, however, normal return policies apply.

Since PayPal is an American company, I can understand there is some aversion to adapting their services to every legal framework known to mankind. But I cannot imagine that American legislation on this topic differs much from Norwegian law. Selling compiled code and selling source code are two very different things, comparable to frozen goods and fresh goods. You don’t have a 3-week return policy on fruit, for obvious reasons.

A parallel is of a man entering a store, buying an ice-cream, slowly removing the wrapping and starting to eat – while walking over to the store manager claiming he never got an ice-cream to begin with.

There is no way in hell that this would fly with an American store manager. A friend of mine in San Diego was so upset on my behalf that he called PayPal directly, but they refused to comment without written consent from me. Which I then sent, only for it to magically disappear.

The second and third calls resulted in 45 and 90 minutes of “please hold”. They literally exhausted their own merchant to make the case go away.

PayPal, trust is a two way street

This episode has shocked me. In fact it has forced me to close my PayPal merchant account permanently. And I will avoid using PayPal as much as possible until they can show normal, human decency for law-abiding citizens, regardless of what country they come from.

Would you run a business with a third-party that can just help themselves to your accounts? I can’t imagine anyone would.

I have no problem giving a customer his money back, provided the delivery ticket is un-spent. Had the customer been unable to download or somehow gain access to the product – then of course normal money back rules apply. I’m not out to cheat anyone, nor am I hard to talk with.

But when there is no dialog at all – and your “bank” ignores the fact that some people are willing to do anything to cheat their fellow man – that’s when I pack up and leave.

The Amiga ARM project

April 19, 2018 Leave a comment

This has been quite the turbulent week. Without getting into all the details, a post that I made with thoughts and ideas for an Amiga inspired OS for ARM escaped the safe confines of our group, Amiga Disrupt, and took on a life of its own.
This led to a few critical posts being issued publicly, which all boiled down to a misunderstanding. Thankfully this has been resolved and things are back to normal.

The question on everyone’s lips now seems to be: did Jon mean what he said, or was it just venting frustration? I thought I made my points clear in my previous post, but sadly Commodore USA formulated a title open for interpretation (which is understandable considering the mayhem at the time). So let’s go through it all and put this to rest.

Am I making an ARM based Amiga inspired OS?

Hopefully I don’t have to. My initial post, the one posted to the Amiga Disrupt comment section (and mistaken for a project release note), had a couple of very clear criteria attached:

If nothing has been done to improve the Amiga situation [with regards to ARM or x86] by the time I finish Amibian.js (*), I will take matters into my own hands and create my own alternative.

(*) As you probably know, Amibian.js is a cloud implementation of Amiga OS, designed to bring the Amiga to the browser. It is powered by a node.js application server that can be hosted either locally (on the same machine as the HTML5 client) or remotely. It runs fine on popular embedded devices such as the Tinkerboard and ODroid, and when run in a full-screen browser with no X or Windows desktop behind it, it is practically indistinguishable from the real thing.

We have customers who use our prototype to deliver cloud-based learning for educational institutions. Shipping ready-to-use hardware units with a pre-baked Amibian.js installed is perfect for schools, libraries, museums, routers and various kiosk projects.


Amibian.js, here running Quake 3 at 60 fps in your browser

Note: This project started years before FriendOS, so we are not a clone of their work.

Obviously this is a large task for one person, but I have written the whole system in Smart Mobile Studio, a product our company started some 7 years ago and that now has a team of six people behind it. In short, it takes object pascal code (the Delphi and Freepascal dialects) and compiles it to JavaScript, suitable for both the browser and NodeJS. It gives you a full IDE with form designer, drag & drop visual components, and a vast and rich RTL (run-time library), which naturally saves me a lot of time and gives me an edge over other companies working with similar technology. So while it’s a huge task, it’s leveraged considerably by the toolchain I made for it.

So am I making a native OS for ARM or x86? The short answer: I will if the situation hasn’t dramatically improved by the time Amibian.js is finished.

Instead of wasting years trying to implement everything from scratch, Pascal Papara took the Linux kernel and ran with it. So AEROS boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded

If you are thinking “so what, who the hell do you think you are?” then perhaps you should take a closer look at my work and history.

I am an ex Quartex member, one of the most infamous hacking cartels in Europe. I have 30 years of software development behind me, having worked as a professional developer since the age of 17. I have a history of taking on “impossible” projects and finding ways to deliver them. Smart Mobile Studio itself was deemed impossible by most Delphi developers; it was close to heresy, triggering an avalanche of criticism for even entertaining the idea that object pascal could be compiled to JavaScript, let alone thrive on the JSVM (JavaScript virtual machine).


Amibian.js runs JavaScript, but also bytecodes. Here showing the assembler prototype

You can imagine the uproar when our generated JavaScript code (compiled from object pascal) actually bested native code. I must admit we didn’t expect that at all, but it changed the way Delphi and object pascal developers looked at the world – for the better I might add.

What I am good at is taking ordinary off-the-shelf parts and assembling them in new and exciting ways – often ways the original authors never intended – in order to produce something unique. My faith is not in myself, but in the ability and innate capacity of human beings to find solutions. The biggest obstacle to progress is ultimately pride and the fear of losing face; something my Buddhist training beat out of me ages ago.

So this is not an ego trip; it’s simply a coder who is completely fed up with the perpetual mismanagement that has held Amiga OS in captivity for two decades.

Amiga OS is a formula, and formulas are bulletproof

People love different aspects of the same thing – and the Amiga is no different. For some, the Amiga is the games. Others love it for its excellent sound capabilities, while some love it for the ease of coding (the 68k is the friendliest CPU ever invented, in my book). And perhaps all of us love the Amiga for the memories we have; a harmless yet valuable nostalgia of better times.


Amiga OS 3.1 pimped up, running on Amibian [native] Raspberry PI 3b

But for me the love was always the OS itself. The architecture of Amiga OS is so elegant and dare I say, pure, compared to other systems. And I’m comparing against both legacy and contemporary systems here. Microsoft Windows (WinAPI) comes close, but the sheer brilliance of Amiga OS is yet to be rivaled.

We are talking about a design that delivered a multimedia-driven, window-based desktop 10 years before the competition. A desktop that would thrive in as little as 512 KB of ram, with fast and reliable pre-emptive multitasking.

I don’t think people realize or understand the true value of Amiga OS. It’s not in the games (although games are definitely a huge part of the experience), the hardware or the programs. The reason people have been fighting bitterly over Amiga OS for a lifetime is that the operating system architecture, or “formula”, is unmatched to this very day.

Can you imagine what a system that thrives under 512 KB would do to the desktop market? Or even better, what it could bring to the table for embedded and server technology?

And this is where my frustration soars. Even though we have OS 4.1, we have been forced to stand idly by and watch as mistake after mistake is made. Opportunities that were ripe for the taking (some of them literally placed on Hyperion’s doorstep) have been thrown by the wayside time and time again.

And they are not alone. Aros and Morphos have likewise missed a lot of opportunities – both opportunities to generate income and secure development, and opportunities to embrace new technology. Although I must stress that I sympathize with Aros, since they lack any official funding. Morphos is doing much better using a normal, commercial license.

Frustration, the mother of invention

When the Raspberry PI was first released I jumped for joy. Finally an SBC (single board computer) with enough power to run a light version of Amiga OS 4.1, with a price tag that everyone can live with. I rushed over to Hyperion’s website to see if they had issued a statement about the PI, but nothing could be found. The AEON site was likewise empty.

The PI version 2 came and went, still with no sign that Hyperion would capitalize on the situation. I expected them to issue an “Amiga OS 4.1 light” edition for ARM, which would put them on the map and help them establish a user base. Without a user base and fresh blood there is no chance in hell of selling next-generation machines in large enough quantities to justify future development. But once again, opportunity after opportunity came and went.

Sexy, fast and modern: Amiga OS 4.1 would do wonders on ARM

Faster and better-suited SBCs started to turn up in droves: the ODroid, the Beaglebone Black, the Tinkerboard, the Banana PI – and many, many others. When the Snapdragon IV CPUs shipped on a $120 SBC, using the same processor as the Samsung Galaxy S6, I was sure Hyperion would wake up and bring Amiga OS to the masses. But not a word.

Instead we were told to wait for the Amiga x5000, which is based on PPC. I have no problem with PPC; it’s a great platform and packs a serious punch. But since PPC no longer sells to mainstream computer companies like it used to, the price penalty would be nothing short of astronomical. There is also the question of longevity and being able to maintain a PPC-based system for the foreseeable future. Where exactly is PPC in 15 years?

Note: one of the reasons PPC was selected has to do with coding infrastructure. PPC has an established standard, something ARM lacked at the time (a comparable standard was first established for ARM in 2014). PPC also has an established set of development platforms that you can build on, with libraries and pre-fabricated modules (think components that you can use to quickly build what you need) that have been polished for two decades now. A developer who knows PPC from the Amiga days will naturally feel more at home with PPC. But sadly PPC is the past, and modern development takes place almost exclusively on ARM and x86. Even x86 is said to have an expiration date now.

The only group that has genuinely tried to bring Amiga OS to ARM is the Aros team. They got their system compiled, implemented some rudimentary drivers (information on this has been thin, to say the least) and had it booting natively on the Raspberry PI 3b. Sadly they lacked a USB stack (remember the pre-fab modules I mentioned above? This is a typical example – PPC devtools ship with modules like this out of the box), so things like mouse, keyboard and external peripherals wouldn’t work.


Aeros, the fastest Amiga you will ever play with. Running on the Raspberry PI 3b

And as always – this being the curse of the Amiga – “something came up”, and the whole Raspberry PI / ARM initiative was left for dead. The details around this are sketchy, but the lead developer had a personal issue that forced him to set a new direction in life. And for some reason the other Aros developers have just continued with x86, even though a polished ARM version could have made them some money and helped finance future development. It’s the same story, again and again.

But then something amazing happened! Out of the blue came Pascal Papara with a new take on Aros, namely AEROS. This is a distro after my own heart. Instead of wasting years trying to implement everything from scratch, Pascal took the Linux kernel and ran with it. So AEROS boots by virtue of the Linux kernel, but jumps straight into Aros once the drivers have loaded. And the result? It is the fastest desktop you will ever experience on ARM. Seriously, it runs so fast and smoothly on the Raspberry PI that you could easily mistake it for a $450 Intel i3.

Sadly, Pascal has been more or less alone in this development. And truth be told, he has molded it to suit his own needs rather than the consumer’s. Since his work includes a game machine and some Linux services, the whole Linux system is exposed to the Aros desktop. This is a huge mistake.

Using the Linux kernel to capitalize on the thousands of man-hours invested in it, not to mention the massive Linux driver database, is a great idea. It’s also the first thing that came to my mind when contemplating the issue.

But when running Aros on top of this, the Linux aspect of the system should be abstracted away, much like what Apple did with Unix. You should hardly notice that Linux is there unless you open a shell and start to investigate. The Amiga filesystem should be the only filesystem you see when accessing things from the desktop, with a nice preferences option for showing / hiding mounted Linux drives.

My plans for an ARM based Amiga inspired OS

Building an OS is not a task for the faint of heart. Yes, there are a lot of embedded / pre-fab based systems to pick from out there, but you also have to be sensible. You are not going to code a better kernel than Linus Torvalds, so instead of wasting years trying to catch up with something you cannot possibly catch up with – just grab the kernel and make it work for us.

The Linux kernel solves things such as process contexts, “userland” vs “kernel space” (giving the kernel the power to kill a task and reclaim resources), multitasking / threading, thread priorities, critical sections, mutexes and global event objects; it gives us IPC (inter process communication), disk IO, established and rock solid sound and graphics frameworks; and last but perhaps most important: free access to the millions of drivers in the Linux repository.


Early Amibian.js login dialog

You would have to be certified insane to ignore the Linux kernel, thinking you will somehow be the guy (or group) that can teach Linus Torvalds a lesson. This is a man who has been writing kernels for 20+ years, and he does nothing else. He is surrounded by a proverbial army of developers who code, test, refactor and strive to deliver optimal performance, safety and quality assurance. So sorry if I push your buttons here, but you would be a moron to take him on. Instead, absorb the kernel and gain access to the benefits it has given Linux (technically the kernel is “Linux”, the rest is GNU – but you get what I mean).

With the Linux kernel as a foundation, as much as 50% of the work involved in writing our OS is finished already. You don’t have to invent a driver API. You don’t have to invent a new executable format (or write your own ELF parser if you stick with the Linux executable format). You can use established compilers like GCC / Clang and Freepascal. And you can even cherry-pick some low-level packages for your own native API (like SDL, OpenGL and things that would take years to finish on your own).

But while we want to build our house on rock, we don’t want it to be yet another Linux distro. So with the kernel in place and a significant part of our work done for us, that is also where the similarities end.

The end product is Amiga OS, which means that we need compatibility with the original Amiga ROM libraries (read: the API). Had we started from scratch that would have been a tremendous effort, which is also why Aros is so important: Aros gives us a blueprint of how these APIs can be implemented.

But our main source of inspiration is not Aros, but Amithlon. What we want to do is pipe as much as we can from the Amiga APIs back to the Linux kernel. Things like device detection, memory allocation, file IO, pipes, networking – our library files will be mostly thin wrappers that expose Amiga-compatible calls; methods that call the Linux kernel to do the actual job. So our Amiga library files will be proxy objects whenever possible.
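The proxy-object idea can be illustrated with a small sketch (Python here for brevity, since the real thing would be native code; the class and its wiring are hypothetical, not from any actual codebase). An "Amiga style" file API simply forwards every call to the host kernel:

```python
import os

class AmigaDOSProxy:
    """Hypothetical sketch of a 'proxy object' library: the API surface
    mimics AmigaDOS-style Open/Read/Close calls, but every method just
    forwards to the host (Linux) kernel via the standard OS layer."""

    MODE_OLDFILE = 1005   # AmigaDOS constant: open an existing file
    MODE_NEWFILE = 1006   # AmigaDOS constant: create / truncate

    def Open(self, name, access_mode):
        # Translate the Amiga access mode to host open() flags...
        if access_mode == self.MODE_OLDFILE:
            flags = os.O_RDONLY
        else:
            flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
        return os.open(name, flags)   # ...and let the kernel do the work

    def Read(self, handle, length):
        return os.read(handle, length)

    def Close(self, handle):
        os.close(handle)
```

The wrapper contains no real logic of its own; that is the whole point – the kernel already solved file IO, so the "library" only translates calling conventions.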

AmithlonQEmu

Amithlon, decades ahead of its time

The hard work starts when we get to the window manager, or Intuition. Here we can’t cheat by pushing things back to Linux. We don’t want to install X either (although we could render our system into the X framebuffer if we liked), so we have to code a window manager ourselves. This is not as simple as it sounds, because our system must operate with multiple cores, be multi-threaded by design and tap into the grand scheme of things. Things like messages (which applications use to respond to input) must be established, and all the event codes from the original Amiga OS must be replicated.

So this work won’t be easy, but with the Linux kernel as a foundation, the hardest task of all is taken care of. The magic of a kernel is process management and task switching. This is about as hard-core as it gets; without it you can almost forget the rest. But since we base our system on the Linux kernel, we can focus 100% on the real task – namely to deliver a modern Amiga experience; one that is platform independent (read: conforms to standard Linux and can thus be recompiled and run anywhere Linux runs), preserves as much of the original formula as possible – and can be successfully maintained far into the future.

By pushing as much of our work as possible into user-space (the process space where ordinary programs run; the kernel runs outside this space and is thus unaffected when a program crashes) and adhering to the Linux kernel beneath the bonnet, we get a system that can be re-compiled anywhere Linux runs – and without any change to our codebase. Linux takes care of things like drivers, OpenGL and sound, and presents us with a clean API that is identical on every platform. It doesn’t matter if it’s ARM, PPC, 68k, x86 or MIPS. As long as we follow the standards we are home free.

Last words

I hope all of this clears up the confusion that has surrounded the subject this week. Again, the misunderstanding that led to some unfortunate posts has been resolved. So there is no negativity, no drama and we are all on the same page.

amidesk

Early Amibian.js prototype, running 68k in the browser via uae.js optimized

Just remember that I have set some restrictions for my involvement here. I sincerely hope Hyperion and the Aros development group can focus on ARM, because the community needs this. While the Raspberry PI might seem too small a form-factor to run Aros, projects like Aeros have proven just how effective the Amiga formula is. I’m sure Hyperion could find a powerful ARM SOC in the price range of $120 and sell a complete package with profit for around $200.

What the Amiga community needs now, is not expensive hardware. The userbase has to be expanded horizontally across platforms. Amiga OS / Aros has much to offer the embedded market which today is dominated by overly complex Linux libraries. The Amiga can grow laterally as a more user-friendly alternative, much like Android did for the mobile market. Once the platform is growing and established – then custom hardware could be introduced. But right now that is not what we need.

I also hope that the Aros team drops whatever they are working on, forks Pascal Papara’s codebase, and spends a few weeks polishing the system. Abstract away the Linux foundation like Apple has done, get those sexy 32-bit OS4 icons (Note: the icons used by Amiga OS 4 are available for free download from the designer’s website) and a nice theme that looks similar to OS 4 (but not too similar). Get Lazarus (the Freepascal IDE) going and ship the system with ready-to-use Pascal, C/C++ and Basic development environments. Bring back the fun in computing! The code is already there, use it!

page2-1036-full

Aeros interfaces directly with linux, I would propose a less direct approach

Just take something simple, like a compatible browser. It’s actually not that simple, both for reasons of complexity and how memory is handled by PPC. With a Linux foundation, things like Chromium Embedded could be linked into the Amiga side of things and we would have a native, fast, established and up-to-date browser.

At the same time, since we have API-level compatibility, people can recompile their Aros and Morphos applications and they would run more or less unchanged.

I really hope that my little protest here, if nothing else, helps people realize that there are viable options readily at hand. Commodore is not coming back, and the only future this platform has – is the one we make. So people have to ask themselves how much they want a future.

If the OS gains momentum then there will be grounds for investors to look at custom hardware. They can then choose off-the-shelf parts that are inexpensive to cover the normal functionality you expect in a modern computer – while more resources can go into custom hardware that sets the system apart. But we can’t start there. It has to be built up brick by brick, standing on the shoulders of giants.

OK, rant over 🙂

Smart Mobile Studio 3.0 and beyond

March 20, 2018 Leave a comment

cascade_03

With Smart Mobile Studio 3.0 entering its second beta, Smart Pascal developers are set for a boost in quality, creativity and power. We have worked extremely hard on the product this past year, including a complete rewrite of all our visual controls (and I mean all). We also introduced a completely new theme engine, one that completely de-couples visual appearance from structural architecture (it also allows scripting inside the CSS theme files).

All of that could be enough for a version bump, but we didn’t stop there. Much of the sub-strata in Smart has been re-implemented. Focus has been on stability, speed and future growth. The system is now divided into a set of name-spaces (System, SmartCL, SmartNJ, Phonegap, and Espruino), making it easier to navigate between the units as well as expanding the codebase in the future.

To better understand the namespaces and why this is a good idea, let’s go through how our units are organized.

smart_namespace

The RTL is made to expand easily and preserve as much functionality as possible

  • The System namespace is the foundation. It contains clean, platform independent code. Meaning code that doesn’t rely on the DOM (browser) or runtime (node). Focus here is on universal code, and to establish common object-pascal classes.
  • Our SmartCL namespace contains visual code, meaning code and controls that targets the browser and the DOM. SmartCL rests on the System namespace and draws functionality from it. Through partial classes we also expand classes introduced in the system namespace. A good example is System.Time.pas and SmartCL.Time.pas. The latter expands the class TW3Dispatch with functionality that will only work in the DOM.
  • SmartNJ is our high-level Node.js namespace. Here you find classes with fairly complex behavior such as servers, memory buffers, processes and auxiliary classes. SmartNJ draws from the system namespace just like SmartCL. This was done to avoid multiple implementations of streams, utility classes and common functions. Being able to enjoy the same functionality under all platforms is a very powerful thing.
  • Phonegap is our namespace for mobile devices. A mobile application is basically a normal visual application using SmartCL, but where you access extra functionality through Phonegap. Things like access to a device’s photos, filesystem, dialogs and so on are all delegated via Phonegap.
  • Espruino is a namespace for working with Espruino micro-controllers. This has been a very low-level affair so far, due to the size limitations of these devices. But with our recent changes you can now, when you need to, tap into the System namespace for more demanding behavior.
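The partial-class mechanism mentioned in the list above (System.Time.pas expanding TW3Dispatch in SmartCL.Time.pas) has a close analogue in the JavaScript the compiler emits: one module defines a class, and a second module augments it. The class and method names below are invented for illustration, not the actual RTL output; this is only a sketch of the pattern:

```javascript
// "System" half: platform-independent base class
class TW3Dispatch {
  // repeat a callback a fixed number of times (platform neutral)
  static repeat(times, callback) {
    for (let i = 0; i < times; i++) callback(i);
  }
}

// "SmartCL" half: augment the same class with browser-only behavior,
// the way a partial class adds methods from a separate unit
TW3Dispatch.onNextFrame = function (callback) {
  // would use window.requestAnimationFrame in the browser;
  // guarded here so the sketch also runs under node
  if (typeof requestAnimationFrame === "function") {
    requestAnimationFrame(callback);
  } else {
    setTimeout(callback, 16);
  }
};

// Both halves are visible through the one class
let total = 0;
TW3Dispatch.repeat(3, i => { total += i; });
console.log(total);
```

The point is that platform-neutral code stays in the base definition, while the DOM-only half only ever ships with browser targets.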

As you can see there is a lot of cool stuff in Smart Mobile Studio, and our codebase is maturing nicely. With our new organization we are able to expand both horizontally and vertically without turning the codebase into a gigantic mess (the VCL being a prime example of how not to implement a multi-platform framework).

Common behavior

One of the coolest things we have added has to be the new storage device classes. As you probably know the browser has a somewhat “limited” storage mechanism. You are stuck with name-value pairs in the cache, or a filesystem that is profoundly frustrating to work with. To remedy this we took the time to implement a virtual filesystem (in memory filesystem) that emits data to the cache; we also implemented a virtual storage device stack on top of it, one for each target (!).

In short, if a target has IO capability, we have implemented a storage “driver” for it. So instead of you having to write 4-5 different storage mechanisms – you can now write the storage code once, and it works everywhere.

This is a pretty cool system because it doesn’t limit us to local device storage. We can have device classes that talk to Google Storage, OneDrive, Dropbox and so on. It also opens the door to custom storage solutions, should you already have one pre-made on your server.
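As a rough sketch of what a per-target storage “driver” buys you (the class names here are hypothetical, not the actual RTL API), the idea is that application code only ever talks to one common surface:

```javascript
// Common surface every storage "driver" implements
class StorageDevice {
  read(path)        { throw new Error("not implemented"); }
  write(path, data) { throw new Error("not implemented"); }
}

// In-memory filesystem driver; in the browser this could flush its
// table to the name/value cache (localStorage) behind the scenes
class MemoryDevice extends StorageDevice {
  constructor() { super(); this.files = new Map(); }
  read(path)        { return this.files.get(path) ?? null; }
  write(path, data) { this.files.set(path, data); }
}

// Application code sees only the common API, so the same storage
// logic runs unchanged on every target that has a driver
function saveSettings(device, settings) {
  device.write("/settings.json", JSON.stringify(settings));
}

const device = new MemoryDevice();
saveSettings(device, { theme: "dark" });
console.log(device.read("/settings.json"));
```

Swapping MemoryDevice for a node, Dropbox or server-backed driver would leave saveSettings untouched, which is exactly the “write once” property described above.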

Database support, a quick overview

Databases have always been available in Smart Mobile Studio. We have units for WebSQL, IndexedDB and SQLite. In fact, we even compiled SQLite3 from native C code to asm.js, meaning that the whole database engine is now pure JavaScript and no longer dependent on W3C standards.

smart_db

Each DB engine is implemented according to a framework

Besides these we also have TW3Dataset, which is a clean, Smart Pascal implementation of a single-table dataset (somewhat inspired by Delphi’s TClientDataset). In our previous beta we upgraded TW3Dataset with a robust expression parser, meaning that you can now set filters just like Delphi does. And it’s all written in Smart Mobile Studio, which means there are no dependencies.

 

And of course, there are also direct connections to Embarcadero DataSnap servers and RemObjects SDK servers. This is excellent if you have an existing Delphi infrastructure.

A unified DB framework

If you were hoping for a universal DB framework in beta-2 of v3.0, sadly that will not be the case. The good news is that databases should make it into v3.2 at the latest.

Databases look simple: tables, rows and columns, right? But since each database engine available to JavaScript is written differently from the next, our model has to account for these differences and be dynamic enough to deal with them.

The model we used with WebSQL is turning out to be the best way forward, I feel, but it’s important to leave room for reflection and improvements.

So getting our DB framework established is a priority for us, and we have placed it on our timeline for (at the latest) v3.2. But I’m hoping to have it done by v3.1. So it’s a little ahead of us, but we need that time to properly evolve the framework.

Smart Desktop [a.k.a Amibian.js]

The feedback we have received on our Smart Desktop demos has been pretty overwhelming. It is also nice to know that our prototype is being used to deliver software to schools and educational centers. So our desktop is not going away!

smart_desktop

Fancy a game of Quake at 60+ fps? Web assembly rocks!

But we are not rushing into this without some thought first. The desktop will become a project type like I have written about many times before. So you will be able to create both the desktop and client applications for it. The desktop is suitable for software that requires a windowing environment (a bit like Sencha or similar frameworks). It is also brilliant for kiosk displays and as a remote application hub.

Our new storage device system came largely from Amibian, and with these classes now part of our RTL we can clean up the prototype considerably!

Smart assembler

It may sound like an oxymoron, but a lab project we created while testing our parser framework (the system.text.parser unit) turned into an exercise in compiler/assembler making. We implemented a virtual machine that runs instructions represented by bytecodes (fairly straightforward stuff). It supports the most common assembler methods, vaguely inspired by the Motorola 68k processor with a good dose of ARM thrown in for good measure.
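To get a feel for what a bytecode-driven virtual machine involves, here is a deliberately tiny JavaScript sketch of the general idea – opcodes as numbers, a fetch/decode/execute loop, and a register file. The instruction set below is invented for illustration and is far simpler than the one described above:

```javascript
// Minimal register VM: each instruction is [opcode, operand, operand]
const LOADI = 0, ADD = 1, HALT = 2;

function run(program) {
  const regs = new Int32Array(4); // four general-purpose registers
  let pc = 0;                     // program counter
  for (;;) {
    switch (program[pc]) {
      case LOADI: // LOADI reg, value  -> regs[reg] = value
        regs[program[pc + 1]] = program[pc + 2];
        pc += 3;
        break;
      case ADD:   // ADD dst, src     -> regs[dst] += regs[src]
        regs[program[pc + 1]] += regs[program[pc + 2]];
        pc += 3;
        break;
      case HALT:  // stop and expose the register file
        return regs;
      default:
        throw new Error("illegal opcode " + program[pc]);
    }
  }
}

// r0 = 40; r1 = 2; r0 = r0 + r1
const regs = run([LOADI, 0, 40, LOADI, 1, 2, ADD, 0, 1, HALT]);
console.log(regs[0]); // 42
```

A real system layers a parser and assembler on top (source text to bytecodes) and a disassembler underneath, but the execution core is this same loop.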

smart_assembler

Yes that is a full parser, assembler and runtime model

If you ponder why on earth this would be interesting, consider the following: most web platforms allow for scripting by third-party developers. And by opening up for that, the websites themselves become prone to attacks and security breaches. There is no denying that any JS-based framework is very fragile when potentially hundreds of unknown developers are hacking away at it.

But what if you could offer third parties the ability to write plugins using more traditional languages? Perhaps a dialect of Pascal, a subset of Basic, or perhaps C#? Wouldn’t that be much better? A language and (more importantly) a runtime that you have 100% control over.

While our assembler, disassembler and runtime are still in their infancy (and meant as a demo and exercise), they have future potential. We also made the instructions in such a way that JIT compiling large chunks of it is possible – and the output (or codegen) can be replaced by, for example, WebAssembly.

Right now it’s just a curiosity that people can play with. But when we have more time I will implement high-level parsers and codegens that emit code via this assembler. Suddenly we have a language that runs under Node.js, in the browser or any modern JS runtime engine – and it’s all done using nothing but Smart Mobile Studio.

Well, stay tuned for more!

Facebook, this must change

March 14, 2018 2 comments

Facebook has grown to be more than just a social platform where friends meet. You have groups and communities of every conceivable type, where people of every conviction engage and debate anything you can think of. Groups where people have opinions, are passionate and put ideas to the test.

It has been grand, but lately a negative trend (or technique) has evolved; and sadly Facebook doesn’t seem to get the full scope of its impact. For them, that is.

Childish games

College student looks at sign on classroom door: Blame Shifting 101.

We did this as kids!

It reminds me of behaviour you could see in high school, where someone would do something illegal, and then point the finger at those who tried to stop the act (also known as blame shifting). Today this has evolved into a type of “revenge” tactics, where individuals who lose an argument (regardless of what it may be) get back at others by falsely reporting them.

At first glance this looks silly enough. Go ahead and report me, I have nothing to hide, right? Well, it would be silly if Facebook actually took such complaints seriously and looked at what was written with human eyes. Sadly they don’t, and without any consequences for people who maliciously report users out of sheer spite – the stage is set for the worst of trolls to do what they do best: cause mischief and mayhem for upstanding members.

This has reached such heights that we now see the proverbial “drive-by” reporting of people the trolls don’t like or disagree with (especially in political and economic forums), and it goes unchecked by Facebook.

This is a very negative trend for the platform and has already caused considerable damage; to Facebook, that is. Why? Well, people just move on when the format puts trolls, group campers and reporting snipers (call them what you will) at equal odds with honest, responsible adults that engage in debate.

Group campers and trolls

I was just informed that I had been “reported” and consequently expelled for 7 days due to a violation of terms. I was quite shocked to read this, so I took the time to go through these terms. I was at a complete loss as to which of their standards I had violated. And as it turned out, I had broken none of them. I would never dream of posting pornography, I have not made racist remarks (quite the opposite! In 2017 I kicked a total of 46 members from Delphi Developer for rubbish like that), nor am I a member of the anti-christ movement, and I don’t go around looking for fights either.

What I had done, however, was to catch two members of a group using fake profiles. And in a debate with one of these, telling the individual that his trolling of the group was neither welcome nor decent – his revenge was to report me (!).

troll

Not all sayings translate well to English

What really surprised me was how Facebook seems to take things at face value. There is no way that a human being could be behind such a ruling; at least not one fluent in Norwegian.

First they seem to employ a simple word check, no doubt looking for curses and swear words (using Google Translate or some other lookup service). If you pass that, they seem to check for references to a person or individual in conjunction with negative phrasing. Which, let’s be honest, is a somewhat grey area considering their service covers a whole planet with hundreds of cultures.

In this case the only conceivable negative phrase in my post was “Go troll under a bridge“, which is not an insult but an expression with roots in Norwegian folklore. In Norwegian lore trolls typically lived either up in the mountains or under a bridge. And you had to pay the troll not to eat you (a somewhat fitting description considering the situation).

This goes to character. When the person (or fake profile) here did nothing but post statements designed to cause problems for other members, then that is the very definition of a net-troll. So telling such an individual to troll under a bridge is the same as saying “stop it and get out” [loosely translated]. I could have just banned him, but I tend to give people the benefit of the doubt.

Facebook as a viable platform

I hope Facebook wakes up, because this type of “tactics” has grown and is being used more and more. And if you score a single point on the above criteria, regardless of whether the person who reported the incident is also the source — you are just banned for 7 days. Surely, the act of reporting someone who has not violated the terms should carry equal weight? But that is where Facebook just hides behind a wall of Q&A without any opportunity for actual dialog. They don’t seem to care if the report was false or a pure act of revenge – they just blindly accept it and move on.

The result of this? Well, it’s sort of self-evident isn’t it? People will have to deploy the same tactics in order to survive and protect themselves from such attacks; and voila – you have the extreme rise of fake profiles which has just exploded on Facebook.

troll_platform

Viable platform? I am really starting to question this

Well, I’m not going to create a false profile, because I have some terms of my own; commonly known as “principles”. I run several large groups on Facebook and have been nothing but an asset to their growth. And if they want to lose 7 days of high activity, that is their loss. I am also starting to question if FB is a viable platform at all when a guy running 3 large groups and two businesses there (with a 15-year membership history) can be so easily barred by a fake profile.

But sadly I will stop talking to people who get into arguments and just report + kick them from whatever group they are in. It’s sad, but those are the results of the absolutely absurd practices of Facebook. So until their filters employ some logic, that’s the way things are.

You cannot run a business on kindergarten rules

I sincerely hope you put some effort and thought into how to solve problems like these. For example, scanning the past 3 notes posted by the reporter to see if there are grounds to ignore the report – or in fact ban the reporter for creating the situation to begin with.

All of this can be solved with a simple strike and value system. If you falsely report someone, that’s a strike. If you camp in a group and get multiple reports (within a time-frame), you get automatically banned from that group. If you persistently report someone (a.k.a. bullying) that is another strike. Enough strikes and you get a 7-day warning (or harder depending on the violation).
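The strike system described above is almost trivial to implement; a toy JavaScript sketch, with the threshold and verdicts invented purely for illustration:

```javascript
// Toy strike ledger: strikes accumulate per user, and crossing
// the threshold triggers a temporary ban (numbers are made up)
const STRIKE_LIMIT = 3;

function makeLedger() {
  const strikes = new Map();
  return {
    // record one strike and return the resulting verdict
    strike(user) {
      const n = (strikes.get(user) ?? 0) + 1;
      strikes.set(user, n);
      return n >= STRIKE_LIMIT ? "7-day ban" : "warning";
    },
    count: user => strikes.get(user) ?? 0,
  };
}

const ledger = makeLedger();
ledger.strike("troll");              // false report: strike 1
ledger.strike("troll");              // group camping: strike 2
console.log(ledger.strike("troll")); // third strike crosses the limit
```

A real deployment would of course weight strike types and decay them over time, but the bookkeeping itself is this simple.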

It wouldn’t require much work to create a system where long-standing, responsible members who benefit the platform – are recognized over trolls that do nothing but ruin it. Seriously. I cannot believe that a planet-wide social platform with millions of users is deploying social rules from the late bronze age.

My thoughts go to the Monty Python sketch “She’s a witch!” set in the darkness of medieval Europe. If someone says you are a witch, well then you must be one (sigh). Way to go, Facebook, just way to go.

Oh well, I meant to brush up on my Google+ work anyways 🙂

Alternative pointers in Smart Mobile Studio

February 27, 2018 Leave a comment

Smart Mobile Studio already enjoys a rich and powerful set of memory handling classes and methods. If you have a quick look in the memory units (see below) you will find that Smart Mobile Studio really makes JavaScript sing and dance like no other.

As of writing in version 3.0 BETA the following units are dedicated to raw memory manipulation:

  • System.Memory
  • System.Memory.Allocation
  • System.Memory.Buffer
  • System.Memory.Views

Besides these, the unit System.Types.Convert represents the missing link. It contains the class TDataType which converts data between intrinsic (language level) data types and byte arrays.
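For readers curious what such intrinsic-to-bytes conversion looks like at the JavaScript level, a DataView does the heavy lifting. This is plain JavaScript illustrating the concept, not the TDataType API itself:

```javascript
// Convert a 32-bit integer to its little-endian byte representation
// and back, the way a TDataType-style helper would under the hood
function int32ToBytes(value) {
  const view = new DataView(new ArrayBuffer(4));
  view.setInt32(0, value, true);       // true = little endian
  return Array.from(new Uint8Array(view.buffer));
}

function bytesToInt32(bytes) {
  const view = new DataView(Uint8Array.from(bytes).buffer);
  return view.getInt32(0, true);
}

const bytes = int32ToBytes(305419896); // 0x12345678
console.log(bytes);                    // [120, 86, 52, 18]
console.log(bytesToInt32(bytes));      // 305419896
```

Note how byte order becomes an explicit decision the moment a typed value is flattened to bytes; a conversion class has to pick one and stick to it.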

Alternative pointers

While Smart probably has one of the best frameworks (if not THE best) for memory handling out there – the standard library that ships with Node.js included – the way it works is slightly different from Delphi’s and Freepascal’s approach.

Since JavaScript is reference-based rather than pointer-based, a marshaling offset mechanism is more efficient in terms of performance; so we modeled this aspect of Smart on how C# in particular organizes its memory.

But is it possible to implement more Delphi-like pointers? To some degree, yes. The best approach would be to do this at compiler level, but even without such deep changes to the system you can actually implement a more Delphi-ish interface.

Here is an example of just such a system. It is small and simple, but compared to the memory units in the RTL it’s much slower. This is also why we abandoned this way of handling memory in the first place. But perhaps someone will find it interesting, or it can help you port code from Delphi to HTML5.

unit altpointers;

interface

uses
  W3C.TypedArray,
  System.Types,
  System.Types.Convert,
  System.Memory,
  system.memory.allocation,
  System.Memory.Buffer,
  System.Memory.Views;

type

  Pointer = variant;

  TPointerData = record
    Offset: integer;
    Buffer: JArrayBuffer;
    View:   JUint8Array;
  end;

function IncPointer(Src: Pointer; AddValue: integer): Pointer;
function DecPointer(Src: Pointer; DecValue: integer): Pointer;
function EquPointer(src, dst : Pointer): boolean;

// a := a + bytes
operator + (Pointer,   integer): Pointer uses IncPointer;

// a := a - bytes
operator - (Pointer,   integer): Pointer uses DecPointer;

// if a = b then
operator = (Pointer,   Pointer): boolean uses EquPointer;

function  Allocmem(const Size: integer): Pointer;
function  Addr(const Source: Pointer; const Offset: integer): Pointer;
procedure FreeMem(const Source: Pointer);
procedure MemSet(const Target: pointer; const Value: byte); overload;
procedure MemSet(const Target: pointer; const Values: array of byte); overload;
function  MemGet(const Source: pointer): byte; overload;
function  MemGet(const Source: pointer; ReadLength: integer): TByteArray; overload;

implementation

function MemGet(const Source: pointer): byte;
begin
  if (Source) then
  begin
    var SrcData: TPointerData;
    asm @SrcData = @Source; end;
    result := SrcData.View.items[SrcData.Offset];
  end else
  raise Exception.Create('MemGet failed, invalid pointer error');
end;

function MemGet(const Source: pointer; ReadLength: integer): TByteArray;
begin
  if (Source) then
  begin
    var SrcData: TPointerData;
    asm @SrcData = @Source; end;

    var Offset := SrcData.Offset;

    while ReadLength > 0 do
    begin
      result.add( SrcData.View.items[Offset] );
      inc(Offset);
      dec(ReadLength);

      if offset >= SrcData.View.byteLength then
        raise Exception.Create('MemGet failed, offset exceeds memory');
    end;
  end else
  raise Exception.Create('MemGet failed, invalid pointer error');
end;

procedure MemSet(const Target: pointer; const Value: byte);
begin
  var DstData: TPointerData;
  asm @DstData = @Target; end;
  dstData.View.items[DstData.Offset] := value;
end;

procedure MemSet(const Target: pointer; const Values: array of byte);
begin
  if Values.length > 0 then
  begin
    var DstData: TPointerData;
    asm @DstData = @Target; end;

    var offset := DstData.Offset;
    for var x := low(Values) to high(Values) do
    begin
      dstData.View.items[offset] := Values[x];
      inc(offset);
      if offset >= DstData.View.byteLength then
        raise Exception.Create('MemSet failed, offset exceeds memory');
    end;
  end;
end;

function EquPointer(src, dst : Pointer): boolean;
begin
  if (src) then
  begin
    if (dst) then
    begin
      var SrcData: TPointerData;
      var DstData: TPointerData;
      asm @SrcData = @Src; end;
      asm @DstData = @dst; end;
      result := SrcData.buffer = dstData.buffer;
    end;
  end;
end;

function IncPointer(Src: Pointer; AddValue: integer): Pointer;
begin
  if (Src) then
  begin
    // Check that there is an actual change.
    // If not, just return the same pointer
    if AddValue > 0 then
    begin
      // Map source data
      var SrcData: TPointerData;
      asm @SrcData = @Src; end;

      // Calculate new offset, using the current view
      // position as the present location.
      var NewOffset := srcData.Offset;
      inc(NewOffset, AddValue);

      // Make sure the new offset is within the range of the
      // memory buffer. Picky yes, but this is not native land
      if  (NewOffset >= 0)
      and (NewOffset < SrcData.Buffer.byteLength) then
      begin
        // Setup new pointer data
        var Data: TPointerData;
        Data.Buffer := SrcData.Buffer;
        Data.View := SrcData.View;
        Data.Offset := NewOffset;
        asm
          @result = @data;
        end;
      end else
      raise Exception.Create('IncPointer failed, offset exceeds memory');
    end else
    result := Src;
  end else
  raise Exception.Create('IncPointer failed, invalid pointer error');
end;

function DecPointer(Src: Pointer; DecValue: integer): Pointer;
begin
  if (Src) then
  begin
    // Check that there is an actual change.
    // If not, just return the same pointer
    if DecValue > 0 then
    begin
      // Map source data
      var SrcData: TPointerData;
      asm @SrcData = @Src; end;

      // Calculate new offset, using the current view
      // position as the present location.
      var NewOffset := srcData.Offset;
      dec(NewOffset, DecValue);

      // Make sure the new offset is within the range of the
      // memory buffer. Picky yes, but this is not native land
      if  (NewOffset >= 0)
      and (NewOffset < SrcData.Buffer.byteLength) then
      begin
        // Setup new pointer data
        var Data: TPointerData;
        Data.Buffer := SrcData.Buffer;
        Data.View := SrcData.View;
        Data.Offset := NewOffset;
        asm
          @result = @data;
        end;
      end else
      raise Exception.Create('DecPointer failed, offset exceeds memory');
    end else
    result := Src;
  end else
  raise Exception.Create('DecPointer failed, invalid pointer error');
end;

function Allocmem(const Size: integer): Pointer;
begin
  if Size > 0 then
  begin
    var Data: TPointerData;
    Data.Offset := 0;
    Data.Buffer := JArrayBuffer.Create(Size);
    Data.View := JUint8Array.Create(Data.Buffer, 0, Size);
    asm
      @result = @data;
    end;
  end else
  raise Exception.Create('Allocmem failed, invalid size error');
end;

function Addr(const Source: Pointer; const Offset: integer): Pointer;
begin
  if (Source) then
  begin
    if offset > 0 then
    begin
      // Map source data
      var SrcData: TPointerData;
      asm @SrcData = @Source; end;

      // Check that offset is valid
      if (Offset >=0) and (offset < srcData.buffer.byteLength) then
      begin
        // Setup new Pointer data
        var Data: TPointerData;
        Data.Buffer := SrcData.Buffer;
        Data.View := SrcData.View;
        Data.Offset := Offset;
        asm
          @result = @data;
        end;
      end else
      raise Exception.Create('Addr failed, offset exceeds memory');
    end else
    raise Exception.Create('Addr failed, invalid offset error');
  end else
  raise Exception.Create('Addr failed, invalid pointer error');
end;

procedure FreeMem(const Source: Pointer);
begin
  if (source) then
  begin
    // Map source data
    var SrcData: TPointerData;
    asm @SrcData = @Source; end;

    // Flush reference and let the GC take care of it
    SrcData.Buffer := nil;
    SrcData.View := nil;
    SrcData.Offset := 0;
    asm
      srcData = {}
    end;
  end else
  raise Exception.Create('FreeMem failed, invalid pointer error');
end;

end.

Using the pointers

As you can probably see from the code, there is no such thing as PByte, PWord or PLongword here. We use a clean uint8 typed array that we link to a memory buffer, so the “pointer” here is fully byte-based despite its untyped origins. In reality it just holds a TPointerData structure, but since this is done via asm sections, the compiler can’t see it and treats it as a variant.

The operators add support for code like:

var buffer := allocmem(1024);
memset(buffer, $ff);
buffer := buffer + 1;
memset(buffer, $FA)

But using the overloaded memset procedure is a bit more efficient:

var buffer := allocmem(1024);
var bytes := TDataType.StringToBytes('this is awesome!');
memset(buffer, bytes);
buffer := buffer + bytes.length;
// write more data here

While fun to play with and perhaps useful in porting over older code, I highly recommend that you familiarize yourself with classes like TBinaryData, which represents a fully managed buffer with a rich set of methods to use.

And of course, let us not forget TMemoryStream combined with TStreamWriter and TStreamReader. These will no doubt feel more at home both under HTML5 and Node.js.

Note: WordPress formatting of pascal code is not the best. Click here to view the code as PDF.

Extract DLL member names in Delphi

February 16, 2018 2 comments

Long before dot net and Java I was doing a huge coding system for a large Norwegian company. They wanted a custom scripting engine and they wanted a way to embed bytecodes in dll files. Easy like apple pie (I sure know how to pick’em huh?).

The solution turned out to be simple enough, but this post is not about that, but rather about a unit I wrote as part of the solution. In order to recognize one dll from another, you obviously need the ability to examine a dll file. I mean, you could just load the dll and try to map the functions you need, but that will throw an exception if it’s the wrong dll.

So after a bit of googling around and spending a few hours on MSDN, I sat down and wrote a unit for this. It allows you to load a dll and extract all the method names the library exposes. If nothing else it makes it easier to recognize your dll files.

Well enjoy!

unit dllexamine;

interface

uses
  WinAPI.Windows,
  WinAPI.ImageHlp,
  System.Sysutils,
  System.Classes;

  {
    Reference material for WinAPI functions
    =======================================

    MapAndLoad::
    https://msdn.microsoft.com/en-us/library/windows/desktop/ms680353(v=vs.85).aspx

    UnMapAndLoad:
    https://social.msdn.microsoft.com/search/en-US/windows?query=UnMapAndLoad&refinement=183

    ImageDirectoryEntryToData:
    https://msdn.microsoft.com/en-us/library/windows/desktop/ms680148(v=vs.85).aspx

    ImageRvaToVa:
    https://msdn.microsoft.com/en-us/library/windows/desktop/ms680218(v=vs.85).aspx
  }

  Type

  THexDllExamine = class abstract
  public
    class function Examine(const Filename: AnsiString;
      out Members: TStringlist): boolean; static;
  end;

  implementation

  class function THexDllExamine.Examine(const Filename: AnsiString;
    out Members: TStringlist): boolean;
  type
    TDWordArray = array [0..$FFFFF] of DWORD;
  var
    libinfo:      LoadedImage;
    libDirectory: PImageExportDirectory;
    SizeOfList: Cardinal;
    pDummy: PImageSectionHeader;
    i: Cardinal;
    NameRVAs: ^TDWordArray;
    Name: string;
  begin
    result := false;
    members := nil;

    if MapAndLoad( PAnsiChar(FileName), nil, @libinfo, true, true) then
    begin
      try
        // Get the directory
        libDirectory := ImageDirectoryEntryToData(libinfo.MappedAddress,
          false, IMAGE_DIRECTORY_ENTRY_EXPORT, SizeOfList);

        // Anything to work with?
        if libDirectory <> nil then
        begin

          // Get ptr to first node for the image directory
          NameRVAs := ImageRvaToVa( libinfo.FileHeader,
            libinfo.MappedAddress,
            DWORD(libDirectory^.AddressOfNames),
            pDummy
          );

          // Traverse until end
          Members := TStringList.Create;
          try
            for i := 0 to libDirectory^.NumberOfNames - 1 do
            begin
              Name := PChar(ImageRvaToVa(libinfo.FileHeader,
                libinfo.MappedAddress, NameRVAs^[i], pDummy));
              Name := Name.Trim();
              if Name.Length > 0 then
                Members.Add(Name);
            end;
          except
            on e: exception do
            begin
              FreeAndNil(Members);
              exit;
            end;
          end;

          // We never get here if an exception kicks in
          result := members <> nil;

        end;
      finally
        // Yoga complete, now breathe ..
        UnMapAndLoad(@libinfo);
      end;
    end;
  end;

end.

Smart Pascal assembler, it’s a reality

January 31, 2018 2 comments

After all these years of activity I guess there is no secret that I am a bit over-active at times. I am usually the most happy when I work on 2-3 things at the same time. I also do plenty of research to test theories and explore various technologies. So it’s never a dull moment – and this project has been no exception.

Bytecode based compilation

For the past 7 years I have worked closely with compiler tech of various types and complexity on a daily basis. Script engines like DWScript, PAXScript, PascalScript, C# script, JavaScript (the list continues) – all of these have been used in projects either in-house or for customers; and each serves a particular purpose.

Now while they are all fantastic engines and deliver fantastic results – I have had this “itch” to create something new. Something that approaches the problem of interpreting, compiling and running code from a more low-level angle. One that is more standardized and not just a result of the inventor’s whim or particular style. Which in my view results in a system that won’t need years of updates and maintenance. I am a strong believer in simplicity, meaning that most of the time – a simple ad-hoc solution is the best.

It was this belief that gave birth to Smart Mobile Studio to begin with. Instead of spending a year writing a classical parser, tokenizer, AST and code emitter – we forked DWScript and used it to perform the tokenizing for us. We were also lucky to catch the interest of Eric (the maintainer) and the rest is history. Smart Mobile Studio was born and made with off-the-shelf parts; not boring, grey studies by men in lab coats.

The bytecode project started around the summer of 2017. I had thought about it for a while but this is when I finally took the time to sit down and pen my ideas for a portable virtual machine and bytecode based instruction set. A system that could be easily implemented in any language, from Basic to C/C++, without demanding the almost ridiculous system specs and know-how of Java or the Microsoft CLR.

I labeled the system LDef, short for “language definition format”; I have written a couple of articles on the subject here on my blog, but I did not yet have enough finished to demo my ideas.

Time is always a commodity, and like everyone else the majority of my time is invested in my day job, working on Smart Mobile Studio. The rest is divided between my family, social obligations, working out and hobbies. Hence progress has been slow and sporadic.

But I finally have a working prototype, so the LDEF parser, assembler, disassembler and runtime are no longer a theory but a functional virtual machine.

Power in simplicity

Without much fanfare I have finally reached the stage where I can demonstrate my ideas. It took a long time to get to this point, because before you can even think of designing a language or carve out a bytecode-format, you have to solve quite a few fundamental concepts. These must be in place before you even entertain the idea of starting on the virtual machine – or the project will simply end up as useless spaghetti that nobody understands or wants to work with.

  • Text parsing techniques must be researched properly
  • Virtual machine design must be worked out
  • A well designed instruction-set must be architected
  • Platform criteria must be met

Text parsing sounds easy. It’s one of those topics where people reply “oh yeah, that’s easy” on autopilot. But when you really dig into this subject you realize it’s anything but easy. At least if you want a parser that is fast, trustworthy – and more importantly: one that can be ported to other dialects and languages with relative ease (Delphi, FreePascal, C#, C/C++ are obvious targets). The ideas have to mature, quite frankly.

One of my most central criteria when writing this system has been: no pointers in the core system. How people choose to implement their version of LDEF for other languages is up to them (Delphi and FPC included), but the original prototype should be as clean and down to earth as possible.

Besides, languages like C# are not too keen on pointers anyway. You can use them, but you have to mark your assemblies as “unsafe”. And why bother when var and const parameters offer you a safe and portable alternative? Smart Mobile Studio (or Smart Pascal, the dialect we use) doesn’t use pointers either; we compile to JavaScript after all, where references are the name of the game. So avoiding pointers is more than central; it’s fundamental.

We want the system to be easy to port to any language, even Basic for that matter. And once the VM is ported, LDEF compiled libraries and assemblies can be loaded and used straight away.

The virtual CPU and its aggregates

The virtual machine architecture is the hard part. That’s where the true challenge resides. All the other stuff, be it source parsing, expression management, building a model (AST), data types, generating jump tables, emitting bytecodes – all those tasks are trivial compared to the CPU and its aggregates.

The design and architecture of the CPU (or “runtime” or “virtual machine”, since it consists of many parts) affects everything. It especially shapes the CPU instructions (what they do and how). But as mentioned, the CPU is just one of many parts that make up the virtual machine. What about variable handling? How should variables be allocated, addressed and dealt with? The way the VM deals with this will directly reflect how the bytecode operates and how much code you need to initialize, populate and dispose of a variable.

Then you have more interesting questions like: how should the VM distinguish between global and local variable identities? We want the assembly code to be uniform like real machine code, we don’t want “special” instructions for global variables, and a whole different set of instructions for local variables. LDEF allows you to pass registers, variables, constants and a special register (DC) for data control as you wish. You are not bound to using registers only for math for instance.

I opted for an old trick from the Commodore days, namely “bit shift marking”. Local variables have the first bit of their ID set, while global variables have it zeroed. This allows us to distinguish between global and local variables extremely fast.

Here is a simple example that demonstrates the technique. The Id parameter is the variable id read directly from the bytecode:

function TExample.GetVarId(const Id: integer;
  var IsGlobal: boolean): integer; inline;
begin
  // Bit 0 marks a local variable; the remaining bits hold the index
  IsGlobal := (Id and 1) = 0;
  result := Id shr 1;
end;
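For completeness, here is what the encoding side might look like when the compiler emits bytecode – shift the index left and set bit 0 for locals. This is my own sketch based on the description above, not part of the original listing:

```pascal
// Sketch of the inverse: pack a variable index into a bytecode id.
// Bit 0 marks the variable as local; the index itself lives in the
// remaining bits, mirroring the decoding shown above.
function TExample.MakeVarId(const Index: integer;
  const IsLocal: boolean): integer; inline;
begin
  result := Index shl 1;
  if IsLocal then
    result := result or 1;
end;
```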

This is just one of a hundred details you need to mentally work out before you even attempt the big one: namely how to deal with OOP and inheritance.

So far we have only talked about low-level bytecodes (IL, as it’s called under the .net regime). In both Java and .net, object orientation is intrinsic to the VM. The runtime engine “knows” about objects; it knows about classes and methods and expects the bytecode files to be neatly organized class structures.

LDEF “might” go that way, but honestly I find it more tempting to implement OOP in the assembly itself. So instead of the runtime having intrinsic knowledge of OOP, a high-level compiler will have to emit a scheme for OOP instead. I still need to think and research what is best regarding this topic.

Pictures or it didn’t happen

The prototype is now 97% complete, and it will be uploaded so that people can play around with it. The whole system is implemented in Smart Pascal first (a Delphi and FreePascal version will follow), which means it runs in your browser.

Like you would expect from any ordinary x86 assembler (MASM, NASM, Gnu ASM, IAR [ARM] among others) the system consists of 4 parts:

  • Parser
  • Assembler
  • Disassembler
  • Runtime

So you can write source code directly in the browser, compile / assemble it – and then execute it on the spot. Then you can disassemble it and look at the results in-depth.

assembler

The virtual cpu

The virtual CPU sports a fairly common set of instructions. Unlike Java and .net the CPU has 16 data-aware registers (meaning the registers adopt the type of the value you assign to them, a bit like “variant” in Delphi and C++Builder). Variables allocated using the alloc() instruction can be used just like a register; all the instructions support both registers and variables as params – as well as defined constants, inline constants and strings.

  • R[0] .. R[15] ~ Data aware work registers
  • V[x] ~ Allocated variable
  • DC ~ Data control register

The following instructions are presently supported:

  • alloc [id, datatype]
    Allocate temporary variable
  • vfree [id]
    Release previously allocated variable
  • load [target, source]
    Move data from source to target
  • push [source]
    Push data from a register or variable onto the stack
  • pop [target]
    Pop a value from the stack into a register or variable
  • add [target, source]
    Add value of source to target
  • sub [target, source]
    Subtract source from target
  • mul [target, factor]
    Multiply target by factor
  • div [target, factor]
    Divide target by factor
  • mod [target, factor]
    Modulus: remainder of target divided by factor
  • lsl [target, factor]
    Logical shift left, shift bits to the left by factor
  • lsr [target, factor]
    Logical shift right, shift bits to the right by factor
  • btst [target, bit]
    Test bit in target
  • bset [target, bit]
    Set bit in target
  • bclr [target, bit]
    Clear bit in target
  • and [target, source]
    And target with source
  • or [target, source]
    OR target with source
  • not [target]
    NOT value in target
  • xor [target, source]
    XOR value in target with source
  • cmp  [target, source]
    Compare value in target with source
  • noop
    No operation, used mostly for byte alignment
  • jsr [label]
    Jump sub-routine
  • bne [label]
    Branch not equal, conditional jump based on a compare
  • beq [label]
    Branch equal, conditional jump based on a compare
  • rts
    Return from a JSR call
  • sys [id]
    Call a standard library function

The virtual cpu can support instructions with any number of parameters, but the most common is either one or two.
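To give a feel for the format, here is a tiny hand-written listing that sums the numbers 0 through 9 using the instructions above. The exact notation (labels, comment style, the meaning of sys ids) is my own guess at this point and may differ from the final assembler:

```
        alloc V[0], int32     ; loop counter
        load  V[0], 0
        load  R[0], 0         ; accumulator
loop:
        add   R[0], V[0]      ; accumulator := accumulator + counter
        add   V[0], 1
        cmp   V[0], 10
        bne   loop            ; repeat while counter <> 10
        push  R[0]            ; result onto the stack
        sys   0               ; hypothetical stdlib call, e.g. print
        vfree V[0]
```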

I will document more as the prototype becomes available.

TextCraft 1.2 for Smart Pascal

January 26, 2018 Leave a comment

TextCraft is a fast, generic object-pascal text parsing framework. It provides you with the classes you need to write fast text parsers that build effective data models.

The Textcraft framework was recently moved up to version 1.2 and has been ported from Delphi to both Freepascal and Smart Pascal (the dialect used by Smart Mobile Studio). This is probably the only parsing framework that spans 3 compilers.

Smart Pascal coders can download the framework unit from the BitBucket TextCraft Repository. The unit can be placed in your $Install/Library folder (where $Install is the folder where Smart’s library and rtl folders are installed).

Buffer, parser, model

Textcraft divides the job of parsing into 4 separate objects, each of them representing a concept familiar to people writing compilers. These are: buffer, parser, model and context. If you are parsing a programming language, the “model” would be what people call the AST (short for “Abstract Syntax Tree”). This AST is later fed to the code generator, turning it into an executable program (Smart Pascal compiles to JavaScript, so there really is no limit to the transformation, just the level of complexity).

Note: Textcraft is not a compiler for any particular language; it is a generic, language-agnostic text parsing framework that makes it easy for you to write parsers. We recently used it to parse command-line parameters for Freepascal, so it doesn’t have to be about languages.

The buffer

The buffer has one of the most demanding jobs in the framework. In other frameworks the buffer is often just a memory allocation with a simple read method; but in TextCraft the buffer is responsible for a lot more. It has to expose functions that make text recognition simple and effective; it has to keep track of column and row position as you move through the buffer content – and much, much more. So in TextCraft the buffer is where the text methodology is implemented in full.

The parser

As mentioned, the parser is responsible for using the buffer’s methods to recognize and make sense of a text. As it makes its way through the buffer content, it creates model-objects that represent each element. Typical for a language would be structures (records), classes, enums, properties and so on. Each of these will be registered in the AST data model.

The Model

The model is a construct. It is made up of as many model-object instances as you need to express the text in symbolic form. It doesn’t matter if you are parsing a text document or source code; you would still have to define a model for it.

The model obviously reflects your needs. If you just need a superficial overview of the data, then you create a simple model. If you need more elaborate information, then you model that too.

Note: When parsing a text document, a traditional organization would be to divide the model into: chapter, section, paragraph, line and individual words.

The Context

The context object is what links the parser to our model and buffer objects. By default the parser doesn’t know anything about the buffer or model. This helps us abstract away things that would otherwise turn our code into a haystack of references.

The way the context is used can be described like this:

When parsing complex data you often divide the job into multiple classes. Each class deals with one particular topic. For example: if parsing Delphi source code, you would write a class that parses records, a parser that handles classes, another that handles field declarations (and so on).

As a parser recognizes one of these constructs, say a record, it will create a record model object to hold the information. It will then add that to the context by pushing it onto its reference stack.

The first thing a child parser does is to grab the current model object from the reference stack. This way the child parsers will always know where to store their model information. It doesn’t matter how deep or recursive something gets; the stack approach, and passing the context object to the child parsers, will always make sure each parser “knows” where to store information.
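As a rough sketch, that hand-off could look like the following. Note that the class and method names here are purely illustrative; they are not the actual TextCraft API:

```pascal
// Illustrative only: a parent parser creating a model object,
// pushing it onto the context's reference stack, and letting a
// child parser populate it. All names here are hypothetical.
procedure TRecordParser.Parse(const Context: TParserContext);
var
  LModel: TRecordModel;
begin
  LModel := TRecordModel.Create;
  Context.Push(LModel);        // make our model the "current" target
  try
    // the child grabs the top of the stack and stores fields in it
    FFieldParser.Parse(Context);
  finally
    Context.Pop;               // restore the previous target
  end;
end;
```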

Why is this important?

This is important because it’s cost-effective in computing terms. The TextCraft framework allows you to create parsers that can chew through complex data without turning your project into spaghetti.

So no matter if you are parsing phone-numbers, zip codes or complex C++ source code, TextCraft will help you get the job done; in a way that is easy to understand and maintain.

Smart Mobile Studio: more cmd tools

January 24, 2018 Leave a comment

Being able to compile and work with projects from the command-line has been possible with Smart Mobile Studio almost since the beginning. But as projects grow, so does the need for more automation.

Toolbox

The IDE contains a few interesting features, like the “Data to picture” function. This takes a datafile (or any file) and places the raw bytes into a png picture as pixels. This is a great way of loading data that the browser would otherwise block or ignore.

People have asked if we could perhaps turn these into command-line tools as well, and I have finally gotten around to doing just that. So our toolbox now contains a couple more command-line tools (not just the smsc compiler):

  • Superglue
  • DataToImage

Superglue

When you work with large JavaScript libraries they often consist of multiple files. This is great for JS developers and no different from how we use multiple unit-files to organize a project.

But it can be problematic when you deploy applications, because if the dependencies are heavy then your application will load slower. A typical example is ACE, the code editor we recently added to Smart. It’s a fantastic editor, but it consists of a monstrous number of files.

Superglue can import files based on a filter (like *.js) or a semi-colon delimited list. It will then merge these files together into a single file.

For example, let’s say you have 35 javascript files that make up a library, and let’s say you have downloaded and unpacked this to “C:\Temp” on your harddisk. To link all the JS files into a single file, you would type:

superglue -mode:filter -root:"C:\temp" -filter:"*.js" -sort -out:"C:\Smart\Libraries\MyLibrary\MyLibrary.js"

The above will enumerate all the files in “C:\Temp” and only keep those with a .JS file extension. It will sort the files since the -sort switch is set, and finally link all the files into a new, single file called MyLibrary.js (in another location).

So instead of shipping 35 files, which means 35 http loads, we ship one file and load the data in ourselves when the application starts.

DataToImage

As the name implies this is the same function that you find in the IDE. It takes a raw data file (actually, any file) and injects the bytes as pixels in a new PNG file. Code for extracting the data again already exists in the RTL – but I will brush this up again when we add these tools to our toolbox.

Using this is simplicity itself:

datatoimage -input:"mysqldb.sq3" -output:"c:\smart\projects\mymobileapp\res\defaultdata.png"

The above takes a default sqlite database and stores it inside a picture. In the application we load the picture in, extract the data, and then use that as our default data – which is later stored in the browser cache. This saves us having to execute a ton of sql-statements to establish a DB from scratch in memory.

Better parsing

These tools are very simple. They don’t take long to make, but they do need to be reliable. And they do need to be in place when you need them.

We actually ported over TextCraft, a parser we use both in Smart Mobile Studio and Delphi, so it would compile under Freepascal. There was a huge bug in the way Lazarus deals with parameters, so we ended up writing a fresh new command-line parser.

Future tools

We have a lot on our plate, so I doubt we will focus on our toolbox much after these. They simplify library making and data injection for projects, and you can use shell scripts to implement the “make files” most people rely on these days.

However, one tool that would be very handy is a “project to xmlhelp” or similar. A command-line program that will scan your Smart project and emit a full overview of your classes, methods and properties in traditional xml-help format.

But we will see when time allows — at least making libraries and merging in data will be easier from now on 🙂

Fixed Header in Smart Applications

January 3, 2018 Leave a comment

Smart Mobile Studio gives you a lot of really cool visual controls to play with. One of them is a header control (also called a navigation panel by some) that traditionally shows and hides its buttons (back and next) in response to form navigation.

One question that many people have asked is: how can I make a header that remains fixed and doesn’t scroll with the forms? So no matter what form I navigate to, the header remains in place. Preferably easily accessed.

The Visual Application

Smart Visual Applications are more than just forms and buttons. The first thing that is created when you run a visual Smart Application, is naturally an instance of TApplication; this in turn creates a display control, and inside that again there is something called a “viewport”. Forms are always created inside the viewport.

If you are wondering why on earth we use two nested containers like this, that has to do with scrolling and keeping our controls isolated in one place. Forms are positioned horizontally inside the viewport. So whenever you are moving from Form1 to Form2, depending on the scroll-effect you have picked, the second form is lined up either before or after the current form. We then execute a CSS3 animation that smoothly scrolls the new form into view, or the previous form out of view – depending on how you look at it.

The display

The root display control, TW3Display, has only one job; and that is to house the view control. It also contains code to layout child controls vertically. Since there is typically only one control present – that means you don’t notice much of what TW3Display does.

The “trick” to a static header that remains unaffected by forms is simply to create the header control with “Application.Display” as the parent. That is all you have to do. You could also create it on Application.Display.View, but then it would cause problems with scrolling. My point for mentioning that is to underline how the RTL has no special rules for its structure. All visual entities that make up your Smart Pascal application follow the same laws and are subject to the same rules as TW3Button or TW3Label might be.

Creating controls that don’t attach to a form

The vertical layout that TW3Display does automatically is very simple. It sorts the child elements based on their Y position and places them directly after each other. This means that all you have to do is create the header and give it a negative Y position, and it will always remain fixed on top of the Viewport and its forms.

TW3Application has a virtual method called ApplicationStarting() that is perfect for what we want to achieve. As the name says this method fires when the application is starting, so this is perfect for creating controls that don’t attach to a form. It also has an accompanying ApplicationClosing() method where we can release the control.

So let’s start by creating our control. Each visual application has a “unit1” that is created automatically. This contains your application object. While TApplication is a bit anonymous under Delphi or Lazarus, under Smart it serves a more central role. It’s the place you expose global values that should be usable throughout the entire program.

unit Unit1;

interface

uses
  Pseudo.CreateForms, // auto-generated unit that creates forms during startup
  System.Types, SmartCL.System, SmartCL.Components, SmartCL.Forms,
  SmartCL.Application,
  SmartCL.Controls.Header,
  Form1;

type

  TApplication  = class(TW3CustomApplication)
  private
    FHeader:  TW3HeaderControl;
  protected
    procedure ApplicationStarting; override;
    procedure ApplicationClosing; override;
  public
    property  Header: TW3HeaderControl read FHeader;
  end;

implementation

procedure TApplication.ApplicationStarting;
begin
  inherited;
  FHeader := TW3HeaderControl.Create(Display);
  FHeader.SetBounds(0, -10, 100, 46);
end;

procedure TApplication.ApplicationClosing;
begin
  FHeader.free;
  inherited;
end;

end.

Let’s compile and see what we got so far!

static_01

As expected we now have a header outside the form region

Global access

SmartCL, which is the namespace (a collection of units organized under one name) where all visual, DOM based classes live, has a global function for getting the Application object. This is simply Application() and you have probably used it many times.

What is not so well-known is that Application() returns a stock TW3CustomApplication instance. In other words, if you inspect the instance you will find none of the properties you have defined in TApplication. This is because TApplication is unknown until the application is executed. So in order to access your actual application object, you need to typecast; like I do here:

procedure TForm1.InitializeObject;
begin
  inherited;
  {$I 'Form1:impl'}
  var app := TApplication(Application);
  app.Header.Title.Caption := 'This is my header';
end;

Let’s have a look at the result (note: I added a label as well, just so you don’t think you missed something):

static_02

Now this approach works fine for many types of objects. I tend to isolate my database instance there, static header, global storage — all of it can be neatly exposed via TApplication. Fast, simple and efficient.

The final step

The initial state for the static header should be that both buttons are hidden by default. So when you start the application it just shows a title, nothing more.

When you click something that causes navigation to form2 (or some other second form), the back-button should become visible once form2 has scrolled into view.

When the user clicks the back-button, the opposite should happen. The back button should be disabled while you navigate back to form1, then completely hidden once you have arrived.

I don’t think I need to demonstrate this. Obviously, if you have forms that lead to more forms – then you probably want to add a “navigation stack” to the application object; an array that holds the previously visited forms.

Then whenever someone hits the “back button” you just pop the previous form off the stack, and navigate to it.
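A minimal sketch of such a stack on the application object could look like this. The field name, the GotoFormByName-style call and the array helpers are assumptions on my part; adapt them to your own RTL version and form handling:

```pascal
// Hypothetical navigation stack on TApplication. FHistory is a
// string array field; GotoFormByName and CurrentFormName stand in
// for whatever navigation calls your project uses.
procedure TApplication.NavigateTo(const FormName: string);
begin
  FHistory.Add(CurrentFormName);    // remember where we came from
  GotoFormByName(FormName);
end;

procedure TApplication.NavigateBack;
begin
  if FHistory.Length > 0 then
  begin
    GotoFormByName(FHistory[FHistory.High]);  // pop the previous form
    FHistory.Delete(FHistory.High, 1);
  end;
end;
```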

Well, hope it helps!

PNG icons on Amiga OS 3.X

December 6, 2017 2 comments

A couple of days back I posted some pictures of my Raspberry PI 3b based Amiga setup. This caused quite a stir on several groups and people were unsure what exactly I was posting. Is this Amiga OS 4? Is it Aros? Scalos? Or perhaps just a pimped up classic Amiga 3.x?

image2

The more the questions arose, the more I realized that a lot of people don’t really know what the PI can do. I don’t blame them; between work, kids and mending a broken back it probably took me a year before I even entertained the idea of setting up a proper UAE environment. And as luck would have it, two good friends of mine, Gunnar Kristjánsson and Thomas Navarro Garcia, had already done the worst part: namely to produce a Linux distro that auto-boots into Workbench (or technically, into a full screen UAE environment).

Taking advantage of speed

Purists might not be happy about it, but the PI delivers some serious processing power when it comes to Amiga emulation. The version of UAE Thomas and Gunnar opted for is UAE4Arm, which is a special version that contains a hand-optimized JIT engine. This takes 68k code and generates ARM machine code “on the fly” and is thus able to run Amiga software much faster than traditional UAE variations like fs-uae.

But what should we do with all that extra speed? I mean, there is a limited number of tasks that benefit from the extra processing power of the PI (or an accelerator for that matter). Well, being a programmer, compilation is one process where I really love the extra grunt. When using modern compilers like freepascal 3.x on a classic 68k Amiga, there is no denying we need all the cpu power we can get. So compiling on the PI is a great boost over ordinary, real Amiga machines.

image3

Freepascal is great, although the old “turbo” ide is due for an overhaul

The second aspect is the infrastructure. And this is where we get to the pimping part. By default Workbench is optimized for low-color representation, meaning that icons and backdrops will be 4-8 colors, fixed palette and fairly useless by modern standards. But since UAE4Arm has built-in support for RTG (re-targetable graphics), which means 15, 16, 24 and 32 bit screen-modes (the same as any modern PC), surely we can remedy the visuals, right?

Well, I had a google around and found that there is an icon library that supports the latest png based icons. These are icons that contain 32bit graphics with support for alpha blending (transparency). This is the exact same icon system that is used in Amiga OS 4.

So what I did was download the version 46.x icon library from Aminet. Since the PI emulates (in my config) a mc68040 cpu, I was able to use the 040 optimized binary. And in essence I just copied that into my “libs” folder (removing the old one first just to be sure).

And voila, my Workbench was now able to show 32 bit png icons, just like OS 4!

Getting some bling

With OS 4 style icons supported, where do I get some icons to play with? Well, again I went on Aminet and downloaded a ton of large icon packs. I also visited OS4Depot and downloaded some cool background pictures and even more icons.

Then it was the time-consuming process of manually replacing the *.info files. All files that you can see via Workbench have an associated .info file with the same name. So if you have a program called “myprogram”, then the icon file will be “myprogram.info”.

And that’s basically it! I spent a Saturday replacing icons and doing some mild tweaking in VisualPrefs (again on Aminet), and suddenly my old, grey workbench was alive with radiant colors.

image1

I love it! It might not be perfect, but I have seen Linux distros that look worse!

What I find amazing is that even after 30 years the old Amiga OS 3.x can still surprise us! If nothing else it’s a testament to the flexible architecture the guys at Commodore knocked out; an architecture that thrives in extremely low memory situations – yet delivers in spades if you give it more to work with.

Doing some modern chores

One of the first things I installed on my PI was a copy of freepascal. This has been updated to version 3.1, which is just one revision behind the compiler used on Windows and OSX. This is a bit too hefty for standard Amiga machines; you need at least an A1200 with 64 megabytes of ram to work with it. Although the size of the binaries is reasonably small if you stay clear of the somewhat bloated LCL framework.

So I was able to use my object pascal skills to create an unzip/zip command-line program in 15 minutes. Doing this on my Amibian box felt great, and I really enjoy the fresh new look of Workbench. In a perfect world OS4 would be 68k and the CPUs would all be FPGAs running close to Intel i7 speeds, but alas – a humble PI will have to do for now.

Amibian

If you want to re-create my experiment then start by downloading Amibian. This is a clean Linux distro and doesn’t contain Workbench, so after you have made an sd-card with Amibian you need to copy over Workbench yourself. I suggest you copy over the raw files and mount a linux folder as a drive. Using harddisk images is possible, but I don’t trust them; should an error occur you lose everything. So yeah, stick with folder-mounted drives if you want less frustration.

You can visit Amibian here: https://gunkrist79.wixsite.com/amibian