Archive

Posts Tagged ‘Delphi’

Hydra now supports FreePascal and Java

June 15, 2019

In case you guys missed it, RemObjects Hydra 6.2 now supports FreePascal!

This means that you can now use forms and units from .net and Java in your FreePascal applications – and (drumroll) also mix and match between Delphi, .net, Java and FPC modules! So if you see something cool that FreePascal lacks, just slap it in a Hydra module and you can use it across language barriers.

I have used Hydra for years with Delphi, and being able to use .net forms and components in Delphi is pretty awesome. It’s also a great framework for building modular applications that are easier to manage.


Being able to tap into FreePascal is a great feature. Or the other way around, with FreePascal showing forms from Delphi, .net or Java.

For example, if you are moving to FreePascal, you can isolate the forms or controls that are not available under FreePascal in a Hydra module, and voila – you can gradually migrate.

If you are moving to Oxygene Pascal the same applies, you can implement the immediate logic under .net, and then import and use the parts that can’t easily be ported (or that you want to wait with).

The best of four worlds — You gotta love that!

Check out Hydra here:

https://hydra.remobjects.com/hydra/whatsnew/default.aspx

 

RemObjects Remoting SDK?

June 3, 2019

Reading this you could be forgiven for thinking that I must promote RemObjects products. It’s my job now, right? Well yes, but also no.

The thing is, I’m really not “traveling salesman” material by any stretch of the imagination. My tolerance for bullshit is ridiculously low, and being practical of nature I loathe fancy products that cost a fortune yet deliver nothing but superficial fluff.

The reasons I went to work at RemObjects are many, but most of all it’s because I have been an avid supporter of their products since they launched. I have used and seen their products in action under intense pressure, and I have come to put some faith in their solutions.

Trying to describe what it’s like to write servers that should handle thousands of active users “with or without” RemObjects Remoting SDK is exhausting, because you end up sounding like a fanatic. Having said that, I feel comfortable talking about the products because I speak from experience.

I will try to outline some of the benefits here, but you really should check it out yourself. You can download a trial directly here: https://www.remotingsdk.com/ro/

Remoting framework, what’s that?

RemObjects Remoting framework (or “RemObjects SDK” as it was called earlier) is a framework for writing large-scale RPC (remote procedure call) servers and services. Unlike the typical solutions available for Delphi and C++ builder, including those from Embarcadero I might add, RemObjects’ framework stands out because it distinguishes between transport, host and message-format – and above all, its sheer quality and ease of use.


RemObjects Remoting SDK ships with a rich selection of channels and message formats

This separation between transport, host and message-format makes a lot of sense, because the parameters and data involved in calling a server-method shouldn’t really be affected by how they got there.

And this is where the fun begins, because the framework offers you a great many different server types (channels), and you can put together some interesting combinations by just dragging and dropping components.

How about JSON over email? Or XML over pipes?

The whole idea here is that you don’t have to just work with one standard (and pay through the nose for the privilege). You can mix and match from a rich palette of transport mediums and message-formats and instead focus on your job; to deliver a kick-ass product.
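To make the mix-and-match concrete, here is a rough sketch of how a client could be wired up in code rather than on a form. The class and unit names below are from memory of older SDK versions, so treat them as assumptions and verify against the current documentation:

uses
  uROClient, uROBinMessage, uROIndyHTTPChannel;

var
  LChannel: TROIndyHTTPChannel;
  LMessage: TROBinMessage;
begin
  // Transport: HTTP via Indy. Could just as well be TCP, pipes
  // or email - the service logic never knows the difference
  LChannel := TROIndyHTTPChannel.Create(nil);
  LChannel.TargetURL := 'http://localhost:8099/bin';

  // Message format: binary. Swap in a JSON or SOAP message
  // component to change the wire format without touching the service
  LMessage := TROBinMessage.Create(nil);

  // The proxy generated from the RODL ties the two together, and
  // from here on remote calls read like ordinary method calls:
  //   LService := CoMyService.Create(LMessage, LChannel);
  //   LService.GetInvoiceById(100);
end;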

And should you need something special that isn’t covered by the existing components, deriving your own channel or message classes is likewise a breeze. For example, Andre Mussche has some additional components on GitHub that add a WebSocket server and client. So there is a lot of room for expanding and building on the foundation provided by RemObjects.

And this is where RemObjects has the biggest edge (imho), namely that their solutions shave weeks if not months off your development time. And the central aspect of that is their integrated service designer.

Integration into the Delphi IDE

Dropping components on a form is all good and well, but the moment you start coding services that deploy complex data-types (records or structures) the amount of boilerplate code can become overwhelming.

The whole point of a remoting framework is that it should expose your services to the world. Someone working in .net or Java on the other side of the planet should be able to connect, consume and invoke your services. And for that to happen every minute detail of your service has to follow standards.


The RemObjects Service Builder integrates directly into the Delphi IDE

When you install RemObjects SDK, it also integrates into the Delphi IDE. And one of the features it integrates is a complete, separate service designer. The designer can also be used outside of the Delphi IDE, but I cannot underline enough how handy it is to be able to design your services visually, right there and then, in the Delphi IDE.

This designer doesn’t just help you design your service description (RemObjects has their own RODL file-format, which is a bit like a Microsoft WSDL file), the core purpose is to auto-generate all the boilerplate code for you — directly into your Delphi project (!)

So instead of you having to spend a week typing boilerplate code for your killer solution, you get to focus on implementing the actual methods (which is what you are supposed to be doing in the first place).

DLL services, code re-use and multi-tenancy

The idea of multi-tenancy is an interesting one, and one that I talked about with regards to Rad-Server both in Oslo and London before Christmas. But Rad-Server is not the only system that allows for multi-tenancy. I was doing multi-tenancy with RemObjects SDK some 14 years ago (if not earlier).

Remember how I said the framework distinguishes between transport, message and host? That last bit, namely host, is going to change how you write applications.

When you install the framework, it registers a series of custom project types inside the Delphi IDE. So if you want to create a brand new RemObjects SDK server project, you can just do that via the ordinary File->New->Other menu option.

One of the project types is called a DLL Server. Which literally means you get to isolate a whole service library inside a single DLL file! You can then load in this DLL file and call the functions from other projects. And that is, ultimately, the fundamental principle for multi-tenancy.

And no, you don’t have to compile your project with external packages for this to work. The term “dll-server” can also be a bit confusing, because we are not compiling a network server into a DLL file, we are placing the code for a service into a DLL file. I used this project type to isolate common code, so I wouldn’t have to copy unit-files all over the place when delivering the same functionality.

It’s also a great way to save money. Don’t want to pay for that new upgrade? Happy with the database components you have? Isolate them in a DLL-Server and continue to use the code from your new Delphi edition. I have Delphi XE3 database components running inside a RemObjects DLL-Server that I use from Delphi 10.3.


DLL server is awesome and elegantly solves real-life problems out of the box

In my example I was doing business-logic for our biggest customers. Each of them used the same database, but the way they registered data was different. The company I worked for had bought up these projects (and thus their customers with them), and in order to keep the customers happy we couldn’t force them to re-code their systems to match ours. So we had to come up with a way to upgrade our technology without forcing a change on them.

The first thing I did was to create a “DLL server” that dealt with the database. It exposed methods like openTable(), createInvoice(), getInvoiceById() and so on. All the functions I would need to work with the data without getting my fingers dirty with SQL outside the DLL. So all the nitty gritty of SQL components, queries and whatnot was neatly isolated in that DLL file.

I then created separate DLL-Server projects for each customer, implementing their service interfaces to be identical to their older APIs. These DLLs directly referenced the database library for authentication and doing the actual work.


When integrated with the IDE, you are greeted with a nice welcome window when you start Delphi. Here you can open examples or check out the documentation

Finally, I wrapped it all up in a traditional Windows system service, which contained two different server-channels and the message formats they needed. When the service was started it would simply load in the DLL’s and manually register their services and types with the central channel — and voila, it worked like a charm!
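The loading part of that arrangement is plain Win32, and simple enough to sketch. The RegisterServices export below is hypothetical – the real registration goes through the SDK’s class factories – but it illustrates the principle of a host pulling tenant modules in at runtime:

uses
  Winapi.Windows, System.SysUtils;

type
  // Hypothetical routine each service DLL exports so the host
  // can ask it to hook its services into the central channel
  TRegisterServicesProc = procedure; stdcall;

procedure LoadServiceModule(const FileName: string);
var
  LHandle: HMODULE;
  LRegister: TRegisterServicesProc;
begin
  LHandle := LoadLibrary(PChar(FileName));
  if LHandle = 0 then
    RaiseLastOSError();

  LRegister := GetProcAddress(LHandle, 'RegisterServices');
  if not Assigned(LRegister) then
    raise Exception.CreateFmt('%s exports no RegisterServices', [FileName]);

  // The DLL registers its service types with the running host
  LRegister();
end;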

Rock solid

Some 10 years after I delivered the RemObjects based solution outlined above, I got a call from my old employer. They had been victim of a devastating cyber attack. I got a bit anxious as he went on and on about damages and costs, fearing that I had somehow contributed to the situation.


But it turned out he called to congratulate me! Out of all the services in their server-park, mine were the only ones left standing when the dust settled.

The RemObjects load balancer had correctly dealt with both DDoS and brute-force attacks, and the hackers were left wanting at the gates.

New job, new office, new adventures

May 12, 2019

It’s been roughly 4 weeks since I posted a status report on Amibian.js. I normally keep people up-to-date on Facebook (the “Amiga Disrupt” and “Delphi Developer” groups). It’s been a very hectic month, so I fully understand that people are asking. So let’s look at where the project is at and where we are on the time-line.

For those that might not know, I decided to leave Embarcadero a couple of months ago. I will be working out May before I move on. I wanted to write about that myself in a clean fashion, but sadly the news broke on Facebook prematurely.

Long story short, I have been very fortunate to work at Embarcadero. I am not leaving because there is anything wrong or something like that. I was hired as SC for the EMEA regions, which basically made me the support and presenter for most of Europe, parts of Asia and the Middle East. It’s been a great adventure, but ultimately I had to admit that my passion is coding and community work. Sales is a very important part of any company, but it’s not really my cup of tea; my passion has always been research and development.

So, come first of June I start in a new position at RemObjects. A company that has deep roots with Delphi and C++ builder users – and a company that continues to produce a wealth of high-quality, high-performance frameworks for Delphi and C++ builder. RemObjects also has a strong focus on modern languages, and has a strong portfolio of new and exciting compilers and languages to offer. The Oxygene compiler should be no stranger to Delphi developers: a powerful object-pascal dialect that can target a variety of platforms and chipsets.

Since compiler technology and run-time systems have been my main focus for well over a decade now, I feel RemObjects is a better match.

Quartex Components

Quartex Components has been an officially registered Norwegian company for a while now, so perhaps not news. What is news is that it’s now directly connected with the development of the Quartex Media Desktop (codename “Amibian.js”). While Amibian.js is an open source endeavour, there will be both free and commercial products running on top of that platform. I have written at length about Cloud Forge in the past, so I won’t rehash that again. But 2020 will see a paradigm shift in how teams and companies approach software development.


Company logo professionally milled and on its way to my new office

I will also, once there is more time, continue to sell and support software license components.

Quartex Media Desktop

The “Amibian.js” project is moving along nicely. The deadline is Q4 2019, but I’m hoping to wrap up the core functionality before that. So we are on track and kicking ass 🙂


More and more elaborate functionality is being implemented for the desktop

Here is an overview of work done this month:

  • TSystemService application type has been created (node.js)
    • TApplication now holds IPC functions (inter process communication)
    • Running child processes + sending messages is now simplicity itself
    • Database drivers are 90% done. Delete() and DeleteTable() functionality needs to be implemented in a uniform way
  • Authentication is now a separate service
    • Service database layer is finished (using SQLite3 driver by default)
    • Authentication protocol has been designed
    • Server protocol and JSON message envelopes are done
    • Presently working on the client interface
  • LDEF bytecode assembler has been improved
    • Faster symbolic lookup
    • Smarter register recognition
    • Early support for stack-frames
    • Fixed bug in parser (comma-list parse)
  • QTX framework has seen a lot of work
    • Large parts of the RTL sub-strata has been implemented
    • UTF16 codec implemented
    • QTX versions of common controls:
      • TQTXButton
      • TQTXLabel
      • TQTXToolbar
        • TQTXToolButton
        • TQTXToolSeparator
        • TQTXToolElement
      • TQTXPanel
      • TQTXCheckBox
      • .. and much, much more
  • Desktop changes
    • Link Maker functionality has been added
    • Handshake process between desktop and child app now runs on a separate timer, ensuring better conformity and a more robust initialization
    • The Quartex Editor control has been optimized
      • All redraw calls are now synchronized
      • Canvas is created on demand, avoids flicker during initial redraw
      • Support for DEL key + behavior
      • Gutter is now rendered to an offscreen bitmap and blitted into the control’s canvas. The gutter is only fully rendered when the cursor forces the view to change

I will continue to keep everyone up to date about the project. As you can understand, it’s a bit hectic right now so please be patient – it is turning into an EPIC environment!

Understanding a stack

May 9, 2019

The concept of stacks is an old one, and together with linked-lists and queues – these form the most fundamental programming concepts a developer needs to master.

But the stacks most people use today in languages like object pascal and C++ are not actual stacks; they are more like “conveniently repurposed lists“. Not a huge issue I agree, but the misconception is enough to cause confusion when people dive into low-level programming.

Adventures in assembly-land

It might seem odd to focus on something as trivial as a stack, but I have my reasons. A friend of mine who is a brilliant coder with plenty of large projects behind him recently decided to have a go at assembly coding. He was doing fine and everything was great, until he started pushing and popping things off the stack.

After a little chat I realized that the problem was not his code, but rather how he viewed the stack. He was used to high-level versions of stacks, which in most cases are just lists storing arbitrary sized data – so he was looking at the stack as a TList<item> expecting similar behavior. Superficially a real-stack and a list-stack work the same if all you do is clean push and pop operations, but the moment you start designing a stack-scheme and push more elaborate constructs (stack-frames), things can go wrong really fast.

The nature of a real stack

A “real” stack that is a part of a hardware SOC (system on a chip) has nothing to do with lists. It’s actually a solid chunk of memory with a register to keep track of an offset into this memory block.

Let’s for the sake of argument say you have 4KB of stack space, right? It’s clean and contains nothing, so the SP (stack pointer, or offset) is zero. What happens when you push something to the stack? For example:

push EDX

The code above simply writes the content of the EDX register to whatever offset the SP contains. It then updates the SP with the size of the data (EDX is a 32-bit register, so the SP is incremented by a longword, or 4 bytes). In Delphi pseudocode, what happens is something like:

var LAddr: PByte := FStackBuffer;  // base of the stack memory
inc(LAddr, SP);                    // advance to the current offset
PLongword(LAddr)^ := EDX;          // write the 32-bit value
inc(SP, SizeOf(Longword));         // move SP past the 4 bytes written

The thing about a stack is that it doesn’t manage data-length for you. And that is a big difference to remember. It will push or pop data based on the size of the source (in this case the 32bit EDX register) you use.

If you push 1024 bytes of data to a list based stack, the list keeps track of the size and data for you. So when you pop the data from the stack, you get back that data regardless. But a “real” stack couldn’t care less — which is also why it’s so easy to screw up an entire program if you make a mistake.

In short: the length of what you push must be matched when you pop the data back (!) If you push a longword, you MUST pop a longword later.
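In the same pseudocode style as the push earlier, popping simply reverses the steps:

dec(SP, SizeOf(Longword));         // step back over the 4 bytes we pushed
var LAddr: PByte := FStackBuffer;
inc(LAddr, SP);                    // point at the stored value
EDX := PLongword(LAddr)^;          // read it back into the register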

Benefits of a real stack

The benefit is that the cost of storing values on a stack is almost zero in terms of CPU operations. A list-based stack is more expensive; it will allocate memory for a record-item to hold the information about the data, then it will allocate memory to hold the actual data (depends on the type naturally) and finally copy the data into the newly allocated buffer. Hundreds if not thousands of instructions can be involved here.

A real stack will just write whatever you pushed directly into the stack-memory at whatever offset SP is at. Once written it will add the length of the write to the SP – and that’s it! So it’s one of the oldest and fastest mechanisms for lining up data in a predictable way.

Again the rules are simple: when you pop something off the stack, the size must match whatever you used to push it there. So if you pushed a longword (EDX) you also have to make sure you use a 32-bit target when you pop the value back. If you use RDX, which is 64-bit, then you will essentially steal 4 bytes from something else using that stack – and all hell will break loose down the line.

Stack schemes and frames

I’m not going to dig too deeply into stack-frames here, but instead write a few words about stack-schemes and using the stack to persist data your code relies on. The lines blur between those two topics anyway.

The SP (stack pointer) is not just a simple offset you can read; you can also write and change it (it also serves as a pointer). And you can read from whatever memory the SP is pointing at without popping any data off the stack.

What language developers usually do, is that they design entire structures on the stack that are, when you get into the nitty-gritty, “offset based records”. For example, let’s say you have a record that looks like this:

type
  PMyRecord = ^TMyRecord;
  TMyRecord = record
    first: Pointer;
    second: integer;
    Third: array[0..255] of longword;
  end;

Instead of allocating conventional ram to hold that record, people push it to the stack and then use offsets to read and update the values there. A bit like a super global variable if you like. This is why when you disassemble code, you find stuff like:

mov EDX, (SP)+4

If the above record was on the stack, that pseudo-code would move the field “second” into the EDX register, because that field is 4 bytes from the stack start (provided SP points to zero).
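In the same Delphi-style pseudocode, reading that field straight off the stack would look something like this (assuming a 32-bit target, where the Pointer field occupies 4 bytes):

var LBase: PByte := FStackBuffer;
inc(LBase, SP);                    // start of TMyRecord on the stack
inc(LBase, 4);                     // skip "first: Pointer" (4 bytes)
LSecond := PInteger(LBase)^;       // read the "second" field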

Every programming language has a stack scheme to keep track of things. Local variables, global variables, class instances, type RTTI — most of these things are allocated in conventional ram – but there is a “program record” on the stack that makes it easy to access that information quickly.

This “moving a whole record onto the stack” is basically what a stack-frame is all about. It used to be a very costly affair with a heavy CPU speed penalty. If you look in your Delphi compiler options you will see that there is a checkbox regarding this very topic. Delphi can be told to avoid stack-frames and do register allocation instead, which was super quick compared to stack-frames – but CPUs today are largely optimized for stack-frame allocation by default, so I doubt there is much to gain by this in 2019.

Note: A stack frame is much more, but it’s out of scope for this post. Google it for more info.

To sum up

When doing high-level coding you don’t really need to bother with the nuances between a TStack<item> and a “real” stack. But if you plan on digging deeper and learning a few lines of assembly – learning the differences is imperative. It’s boring stuff, but as fundamental as wheels on a bicycle. There is no way to avoid it, so you might as well jump in.

In its absolute raw form, here is roughly the same functionality for Delphi. This was written on the fly in 2 minutes while on the road, so it’s purely to give you a rough idea of the behavior. I would add a secondary field to keep track of the end (next insertion point), that way SP can be changed without overwriting data on new pushes.

And yes, wrapping this in a TObject utterly defeats the purpose of low-level coding, but hopefully it gives you some idea of the differences 🙂

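A minimal reconstruction of that idea looks like the following – a fixed buffer plus an offset, with no per-item bookkeeping, so the caller must match push and pop sizes exactly as described above (sketch only; uses System.SysUtils for the exceptions):

type
  TRawStack = class
  private
    FBuffer: array of byte;   // the solid chunk of memory
    FSP: integer;             // next insertion point
  public
    constructor Create(const StackSize: integer);
    procedure Push(const Data; const Size: integer);
    procedure Pop(var Data; const Size: integer);
    property SP: integer read FSP;
  end;

constructor TRawStack.Create(const StackSize: integer);
begin
  inherited Create;
  SetLength(FBuffer, StackSize);
end;

procedure TRawStack.Push(const Data; const Size: integer);
begin
  if (FSP + Size) > Length(FBuffer) then
    raise Exception.Create('Stack overflow');
  Move(Data, FBuffer[FSP], Size);   // raw copy, no type information kept
  inc(FSP, Size);
end;

procedure TRawStack.Pop(var Data; const Size: integer);
begin
  if Size > FSP then
    raise Exception.Create('Stack underflow');
  dec(FSP, Size);
  Move(FBuffer[FSP], Data, Size);
end;

Push a longword and pop it into a 64-bit variable, and you corrupt your data exactly like the EDX/RDX example above – the class makes no attempt to save you from yourself.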

Delphi AST, XML and weekend experiments

April 29, 2019

One of the benefits of the Delphi IDE is that it’s a very rich eco-system that component writers and technology partners can tap into for their own products. I know that writing your own components is not something everyone enjoys, but knowing that you can in fact write tools that expand the IDE using just Delphi or C++ builder opens the door to some interesting tools.

Ye old compiler bible

Delphi has a long tradition of “IDE enhancement” software and elaborate third-party tools that automate or delivers some benefit right in the environment. RemObjects SDK is probably the best example of how flexible the IDE truly is. RemObjects SDK integrates a whole service designer, which will generate source-code for you, update the code if you change something – and even generate service manifests for you.

There are also other tools that show off the flexibility of the IDE, ranging from code migration to advanced code refactoring and optimization.

It was with the last bit, namely code refactoring, that a third-party open-source library received a lot of deserving attention a couple of years back. A package called DelphiAST. This is a low-level syntax parser that reads Delphi source-code, applies fundamental syntax checks, and transforms the code into XML. A wet dream for anyone interested in writing advanced tooling that operates directly on source-code level.

Delphi AST

As mentioned above, DelphiAST is a parser. Its job is very simple: parse the code, perform language-level syntax checking, and convert each aspect of the code to a valid XML element. We are not talking about stuffing source-code into a CDATA segment here, but rather breaking each statement into separate tags (begin, end, if, procedure, param) so you can apply filtering, transformations and everything XML has to offer.
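Using it is delightfully simple. From memory, the core calls look like this (unit and class names may have drifted since, so verify against the repository):

uses
  DelphiAST, DelphiAST.Classes, DelphiAST.Writer;

function ParseToXML(const FileName: string): string;
var
  LTree: TSyntaxNode;
begin
  // Parse the unit and get the node tree back
  LTree := TPasSyntaxTreeBuilder.Run(FileName);
  try
    // Serialize the tree as XML (True = formatted output)
    result := TSyntaxTreeWriter.ToXML(LTree, True);
  finally
    LTree.Free;
  end;
end;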

Back when Roman first started on DelphiAST, I got thinking — could we follow this idea further, and apply XML transformation to produce something more interesting? Would it actually be possible to approach the notion of compiling from a whole new angle? Perhaps convert between languages in a more effective way?

The short answer is: yes, everything is possible. But as always there are caveats and obstacles to overcome.

First of all, DelphiAST, despite its name, doesn’t actually generate a fully functional abstract syntax tree (AST). It generates a data model that is very suitable for AST generation, but not an actual AST. Everything in a programming language that can be referenced – like a method, a class, a global variable, a local variable, a parameter – is called a “symbol”. And before you can even think about processing the code, a fast and reliable AST must be in place.

Who cares?

Before I continue, you might be wondering why re-inventing the wheel is even a thing here. Why would anyone research compilers in 2019, when the world abounds with compilers for a multitude of languages?

Because the world of computing is about to be hit by a tsunami, that’s why.

Quartex Pascal

In the next 8-10 years the world of computing will be turned on its head. NVIDIA and roughly 100 tech companies have invested in open-source CPU designs, making it very clear that playing by Intel’s rules and bleeding royalties will no longer be tolerated. IBM has woken up from its “patent induced slumber” and is set to push their P9 cpu architecture, targeting both the high-end server and embedded market (see my article last year on PPC). At the same time Microsoft and Apple have both signaled that they are moving to ARM (an estimate of 5 years is probably reasonable). Laptop betas are said to be already rolling, with a commercial version expected Q3 this year (I think it won’t arrive before Xmas, but who knows).

Intel has remained somewhat silent about any long-term plans, but everyone that keeps an eye on hardware knows they are working like mad on next-gen FPGA, a tech that has the potential to disrupt the whole industry. Work is also being done to bridge FPGA coding with traditional code; there is no way of predicting the outcome of that though.

Oh, and AMD is usurping Intel’s market share at a steady rate — so we are in for a fight to the death.

The rise of C/C++

Those that keep tabs on languages have no doubt noticed the spike in C/C++ popularity lately. And the cause of this is that developers are safeguarding themselves for the storm to come. C as a language might not be the most beautiful out there, but truth be told, its tooling requires the least amount of work to target a new platform. When a new architecture is released, C/C++ is always the first language available. You won’t see C#, Flutter or Rust shipping with the latest and greatest; it’s always GCC or Clang.

Note: GCC is not just C, it’s actually a family of languages, so ironically, Gnu Basic hits a platform at the same time.

Those that have followed my blog for the past 10 years, should be more than aware of my experiments. From compiling to Javascript, generating bytecodes – and right now, moving the whole development paradigm to the browser. Hopefully my readers also recognize why this is important.

But to make you understand why I am so passionate about my compiler experiments, let’s do a little thought experiment:

Rethinking tooling

Let’s say we take Delphi, implement a bytecode format and streamline the RTL to be platform agnostic. What would the consequences of that be?

Well, first of all the compiler process would be split in two. The traditional compilation process would still be there, but it would generate bytecodes rather than machine code. That part would be isolated in a completely separate process; a process that, just like with the Delphi IDE’s infrastructure, could be outsourced to component-writers and technology partners. This in turn would provide the community with a high degree of safety, since the community itself could approach new targets without waiting for Embarcadero.

Even more, such an architecture would not be limited to machine-code. There is no law that says “you must convert bytecodes to machine code”. Since C/C++ is the foundation that modern operating-systems rest on, generating C/C++ source-code that can be built by existing compilers is a valid strategy.

There is also another factor to include in all of this, and that is Linux. Borland was correct in their assessment of Linux (the Kylix project), but they failed miserably with regards to timing. They also gravely underestimated Linux users’ sense of quality, depending on Wine (a Windows compatibility layer) to even function. And they underestimated Freepascal and Lazarus, because Linux is something FPC does exceptionally well. Competing financially against free products won’t work unless you bring outstanding abilities to the table. And Linux has development tools that rival Visual Studio in quality, yet cost nothing.

But no matter how financially tricky Linux might be, we have reached the point in time where Linux is becoming mainstream. 10 years ago I had to set up my own Linux machine. There were no retailers locally that shipped a Linux box. Today I can walk into two major chains and pick up dedicated Linux machines. Ubuntu in particular is well established and delivers LTS.

So for me personally, compiler tech has never been more important. And even more important is the tooling being universal and unbound by any specific API or cpu instruction-set. Firemonkey is absolutely a step in the right direction, but I think it’s a disaster to focus on native UI’s beyond a system level binding. Because replicating the same level of support and functionality for ARM, P9, RISC 5 and whatever monstrosity Intel comes up with through FPGA will take forever.

Transformation based conversion

We have wandered far off topic now, so let’s bring it back to this weekends experiment.

In short, XML transformations to convert code do work, but the right tooling has to be there to make it viable. I implemented a poor-man’s symbol table, just collecting classes, types and methods – and yeah, it works just fine. What worries me a bit though is the XML parser. Microsoft has put a lot of money into XML file handling on enterprise level. When working with massive XML files (read: gigabytes) you really can’t be bothered to load the file into conventional ram and then old-school traverse the XML character by character. Microsoft operates with pure memory mapping so that you can process gigabytes like they were megabytes — but sadly, there is nothing similar for Linux, Unix or Android, which abruptly ends the fascination for me.

The only place I see using XML transformations to process source-code, is when converting to another language on source-level.

So the idea, although technically sound, gives zero benefits over the traditional process. I am however very interested in using DelphiAST to analyze and convert Delphi code directly from the IDE. But that will have to be an experiment for 2020; I’m booked 24/7 with Quartex Media Desktop right now.

But it was great fun playing around with DelphiAST! I loved how clean and neat the codebase has become. So if you need to work with source-code, DelphiAST is just the ticket!

Edit: You don’t have to emit the code as XML. DelphiAST is perfectly happy to act as a clean parser, just saying.

TTween library for Delphi now free

March 23, 2019

I have asked for financial backing while creating libraries that people want and enjoy, and as promised they are released into open-source land afterwards.

HexLicense was open-sourced a while back, and this time it’s the TTween library that is going back to the community.

Tweening?

You have probably noticed how mobile phone UIs have smooth movements? Like on iOS, when you tap “back” the whole display slides smoothly into view, or elements move, grow and shrink using fancy, accelerated effects.

This type of animation is called tweening. And the TTween Library makes it super easy to do the same for your VCL applications.
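Under the hood a tween is nothing mystical: a value interpolated over time through an easing curve, usually driven by a timer. The sketch below shows the general principle only – it is not the TTween API itself:

// Classic smoothstep ease-in-out; t runs from 0 to 1
function EaseInOut(const t: double): double;
begin
  result := t * t * (3.0 - 2.0 * t);
end;

// Interpolate from StartValue to EndValue over Duration milliseconds
function TweenValue(const StartValue, EndValue: double;
  const Elapsed, Duration: double): double;
var
  t: double;
begin
  if Duration <= 0 then
    exit(EndValue);
  t := Elapsed / Duration;
  if t > 1.0 then
    t := 1.0;
  result := StartValue + (EndValue - StartValue) * EaseInOut(t);
end;

Call TweenValue from a TTimer’s OnTimer event with the time elapsed since the animation started, assign the result to a control’s Width or Left, and you get the same smooth acceleration and deceleration.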


Check out this Youtube video to see how you can make your VCL apps scale their controls more smoothly

You can fork the project here: https://bitbucket.org/cipher_diaz/ttween/src/master/

To install the system as ordinary components, just open the “Tweening.dproj” file and install as normal. Remember to add the directory to your libraries path!

Support the cause

If you like my articles and want to see more libraries and techniques, then consider donating to the project here: https://www.paypal.me/quartexNOR


Those that donate $50 or more automatically get access to the Quartex Web OS repositories, including full access to the QTX replacement RTL (for DWScript and Smart Mobile Studio).

Thank you for your support, projects like Amibian.js and the Quartex Web OS would not exist without my backers!

Building a Delphi Database engine, part four

March 23, 2019

This article is over six months late (gasp!). Work at Embarcadero has been extremely time consuming, and my free time has been bound up in my ex-Patreon project. So that’s why I was unable to finish in a more predictable fashion.

But better late than never — and we have finally reached one of the more exciting steps in the evolution of our database engine design, namely the place where we link our metadata to actual data.

So far we have been busy with the underlying mechanisms, how to split up larger pieces of data, how to collect these pieces and re-assemble them, how to grow and scale the database file and so on.

We ended our last article with a working persistence layer, meaning that the codebase is now able to write the metadata to itself, read it back when you open the database, persist sequences (records) – and our humble API is now rich enough to handle tasks like scaling. At present we only support growth, but we can add file compacting later.

Tables and records

In our last article’s code, the metadata exposed a Table class. This table-class in turn exposed an interface to our field-definitions, so that we have a way to define how a table should look before we create the database.

You have probably taken a look at the code (I hope so, or much of this won’t make much sense) and noticed that the record class (TDbLibRecord) is used both as a blueprint for a table (field definitions), as well as the actual class that holds the values.

If you look at the class again (TDbLibRecord can be found in the file dblib.records.pas), you will notice that it has a series of interfaces attached to it:

  • IDbLibFields
  • IStreamPersist

The first one, which we expose in our Table as the FieldDefs property, simply exposes functions for adding and working with the fields. While somewhat different from Delphi’s traditional TFieldDefs class, it’s familiar enough. I don’t think anyone who has used Delphi with databases would be confused by its members:

  IDbLibFields = interface
    ['{0D6A9FE2-24D2-42AE-A343-E65F18409FA2}']
    function    IndexOf(FieldName: string):  integer;
    function    ObjectOf(FieldName: string): TDbLibRecordField;

    function    Add(const FieldName: string; const FieldClass: TDbLibRecordFieldClass): TDbLibRecordField;
    function    Addinteger(const FieldName: string): TDbLibFieldInteger;
    function    AddStr(const FieldName: string): TDbLibFieldString;
    function    Addbyte(const FieldName: string): TDbLibFieldbyte;
    function    AddBool(const FieldName: string): TDbLibFieldboolean;
    function    AddCurrency(const FieldName: string): TDbLibFieldCurrency;
    function    AddData(const FieldName: string): TDbLibFieldData;
    function    AddDateTime(const FieldName: string): TDbLibFieldDateTime;
    function    AddDouble(const FieldName: string): TDbLibFieldDouble;
    function    AddGUID(const FieldName: string):  TDbLibFieldGUID;
    function    AddInt64(const FieldName: string): TDbLibFieldInt64;
    function    AddLong(const FieldName: string): TDbLibFieldLong;
  end;

But, as you can see, this interface is just a small part of what the class is actually about. The class can indeed hold a list of fields, each with its own datatype – but it can also persist these fields to a stream and read them back again. You can also read and write a value to each field. So it is, for all intents and purposes, a single record in class form.

The term people use for this type of class is property bag, and it was a part of the Microsoft standard components (ActiveX / COM) for ages. It’s probably still there, but I prefer my own take on the system.

In this article we are going to finish that work, namely the ability to define a table, create a database based on the metadata, insert a new record, read records, and push the resulting binary data to the database file. And since the persistency is already in place, opening the database and reading the record back is pretty straightforward.

So this is where the metadata stops being just a blue-print, and becomes something tangible and real.

Who owns what?

Before we continue, we have to stop and think about ownership. Right now the database file persists a global list of sequences. The database class itself has no interest in who owns each sequence, if a sequence belongs to a table, if it contains a picture, a number or whatever the content might be — it simply keeps track of where each sequence begins.

So the first order of the day is to expand the metadata for tables to manage whatever records belongs to that table. In short, the database class will focus on data within its scope, and the table instances will maintain their own overview.

So the metadata suddenly needs to save a list of longwords with each table. You might say that this is wasteful, that the list maintained by the database should be eliminated and that each table should keep track of its own data. And while that is tempting to do, there is also something to be said about maintenance. Being able to deal with persisted data without getting involved with the nitty-gritty of tables is going to be useful when things like database compacting enter at the end of our tutorial.

Locking mechanism

Delphi has a very user-friendly locking mechanism when it comes to databases. A table or dataset is either in read, edit or insert mode – and various functions are allowed or prohibited depending on that state. And it would probably be wise to merge the engine with Delphi’s own TDatabase and TTable at some point – but right now I’m more interested in keeping things clean and simple.

When I write “locking mechanism” I am not referring to a file-lock, or memory lock. Had we used memory-mapped files the locking mechanism would have been more elaborate. What I mean with a lock, is basically placing a table in one of the states I mentioned above. The table needs to know what exactly you want to do. Are you adding a record? Are you editing an existing record? The table code needs to know this to safely bring you from one mode to the next.

Suddenly, you realize why each table needs that extra list; because how else is the table going to implement methods like First, Next, Last and Previous? The record-list dealt with by the database is just a generic, non-ordered ledger of sequences (a global scope list if you will). Are you going to read all records back when you open the database to figure out who owns what?

A call to First() will mean a completely different offset for each table. And the logical way to handle this is to give each table its own cursor: a class that keeps track of what records belong to the table, and also keeps track of whatever state the table is in.

The database cursor

Since we are not up against Oracle or MSSQL here, but exploring database theory, I have kept the cursor as simple as I possibly could. It is a humble class that looks like this:

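In outline (reconstructed here from the description; the authoritative declaration is in the repository linked below):

uses
  System.Generics.Collections;

type
  TDbLibCursorMode = (cmRead, cmEdit, cmInsert);

  TDbLibCursor = class
  private
    FRecords: TList<longword>;   // first-block offset of each record
    FMode:    TDbLibCursorMode;  // defaults to cmRead
    FRecNo:   integer;
    procedure SetRecNo(const Value: integer);
  public
    function  Lock(const Mode: TDbLibCursorMode): boolean;
    procedure Post;
    procedure Cancel;
    procedure First;
    procedure Next;
    procedure Previous;
    procedure Last;
    property  Mode: TDbLibCursorMode read FMode;
    property  RecNo: integer read FRecNo write SetRecNo;
  end;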

The idea, of course, is that the table defaults to “read” mode, meaning that you can navigate around, record by record, or jump to a specific record using the traditional RecNo property.

The moment you want to insert or edit a record, you call the Lock() method, passing along the locking you need (edit or insert). You can then either cancel the operation or call post() to push the data down to the file.

The Lock() method is a function (bool), making it easier to write code, as such:

  with Database.GetTableByName('access_log').Cursor do
  begin
    if Lock(cmInsert) then
    begin
      Fields.WriteInt('id', FUserId);
      Fields.WriteStr('name', FUserName);
      Fields.WriteDateTime('access', Now);
      Post();
    end else
      raise Exception.Create('failed to insert record');
  end;

I’m sure there are better designs, and the classes and layout can absolutely be made better; but for our purposes it should be more than adequate.

Reloading record data

In the previous articles we focused on writing data. Basically taking a stream or a buffer, breaking it into pages, and then storing the pages (or blocks) around the file where there was available space.

We cleverly crafted the blocks so that they would contain the offset to the next block in a sequence, making it possible to read back a whole sequence of blocks by just knowing the first one (!)

A part of what the cursor does is also to read data back. Whenever the RecNo field changes – meaning that you are moving around the table-records using the typical Next(), Previous(), First() etc functions – if the cursor is in read mode (meaning: you are not inserting data, nor are you editing an existing record), you have to read the record into memory. Otherwise the in-memory fields won’t contain the data for that record.
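In terms of the cursor outlined earlier, that logic naturally lives in the RecNo setter. A sketch, assuming a ReadRecord helper that loads a sequence by its first-block offset:

procedure TDbLibCursor.SetRecNo(const Value: integer);
begin
  if FMode = cmRead then
  begin
    FRecNo := Value;
    // Look up the first block owned by this record and re-read
    // the whole sequence into the in-memory fields
    ReadRecord(FRecords[FRecNo]);
  end else
    raise Exception.Create('Cannot navigate while editing or inserting');
end;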

Creating a cursor

One note before you dive into the code: you have to create a cursor before you can use it! So just creating a table won’t be enough; you construct the cursor for the table explicitly (the exact call is in the repository linked below).

Creating the cursor will be neatly tucked into a function for the table instance; we still have other issues to deal with.

What to expect next?

Next time we will be looking at editing a record, committing changes and deleting records. And with that in place we have finally reached the point where we can add more elaborate functionality, starting with expression parsing and filters!

You can check out the code here: https://bitbucket.org/cipher_diaz/dbproject/src/master/

Support the cause

If you like my articles and want to see more libraries and techniques, then consider donating to the project here: https://www.paypal.me/quartexNOR


Those that donate $50 or more automatically get access to the Quartex Web OS repositories, including full access to the QTX replacement RTL (for DWScript and Smart Mobile Studio).

Thank you for your support, projects like Amibian.js and the Quartex Web OS would not exist without my backers!

/Jon

VMWare: A Delphi developers best friend

March 3, 2019

Full disclosure: I am not affiliated with any particular virtualization vendor of any sort. The reason I picked VMWare was because their product was faster when I compared the various solutions. So feel free to replace the word VMWare with whatever virtualization software suits your needs.

On Delphi Developer we get new members and questions about Delphi and C++ builder every day. It’s grown into an awesome community where we help each other, do business, find jobs and even become personal friends.

A part of what we do in our community is to tip each other about cool stuff. It doesn’t have to be directly bound to Delphi or code either; people have posted open source graphic programs, video editing, database designers – as long as it’s open source or freeware it’s a great thing (we have a strict policy of no piracy or illegal copying).

Today we got talking about VMWare and how it’s a great time saver. So here goes:

Virtualization

Virtualization is, simply put, a form of emulation. Back in the mid 90s emulators became hugely popular because, for the first time in history, we had CPUs powerful enough to emulate other computers at full speed. This was radical, because up until that point you needed special hardware to do that. You had also been limited to emulating legacy systems with no practical business value.


VmWare Workstation is an amazing piece of engineering

Emulation has always been there, even back in the 80s with 16 bit computers. But while it was technically possible, it was more a curiosity than something an office environment would benefit from (unless you used expensive compute boards). We had to wait until the late 90s to see commercial-grade x86 emulation hitting the market, with Virtuozzo releasing Parallels in 1997 and VMWare showing up around 1998. Both of these companies grew out of the data-center culture and academia.

It’s also worth noting that modern CPUs now support virtualization on a hardware level, so when you are “virtualizing” Windows the machine code is not interpreted or JIT compiled – it runs on the same CPU as your real system.

Why does it matter

Virtualization is not just for data-centers and server-farms, it’s also for desktop use. My personal choice was VMWare because I felt their product performed better than the others. But in all fairness it’s been a few years since I compared between systems, so that might be different today.


A screengrab of my desktop, here showing 3 virtual machines running. I have 64 gigabyte memory and these 3 virtual machines consume around 24 gigabytes and uses 17% of the Intel i7 CPU power during compile. It hardly registers on the CPU stats when idle.

VMWare Workstation is a desktop application available for Windows, Linux and OS X, and it allows me to create virtual machines, or “emulations” if you like. The result is that I can run multiple instances of Windows on a single PC. The virtual machines are all sandboxed in large hard-disk files, and you have to install Windows or Linux into these virtual systems.

The bonus though is fantastic. Once you have installed an operating-system, you can copy it, move it, do partial cloning (only changes are isolated in new sandboxes) and much, much more. The cloning functionality is incredibly powerful, especially for a developer.

It also gives you something called snapshot support. A snapshot is, like the word hints at, a copy of whatever state your virtual machine is in at that point in time. This is a wonderful feature if you remember to use it properly. I try to take snapshots before I install anything, be it larger systems like Delphi, or just utility applications I download. Should something go wrong with the tools your work depends on — you can just roll back to a previous snapshot (!)

A great time saver

Updates to development tools are always awesome, but there are times when things can go wrong. But if you remember to take a snapshot before you install a program, or before you install a component package — should something go wrong, then rolling back to a clean point is reduced to a mouse click.

I mean, imagine you update your development tools, right? Suddenly you realize that a component package your software depends on doesn’t work. If you have installed your devtools directly on the metal, you suddenly have a lot of time-consuming work to do:

  • Re-install your older devtools
  • Re-install your components and fix broken paths

That won’t be a problem if you only have 2-3 packages, but I have hundreds of components installed on my rig. Just getting my components working can take almost a full work-day, and I’m not exaggerating (!).

With VMWare, I just roll back to when all was fine, and go about my work like nothing happened.

I made a quick, slapdash video to demonstrate how easy VmWare makes my Delphi and JS development. If you are not using virtualization I hope this video at least makes it a bit clearer why so many do.


Click the image to watch the video on YouTube

Five reasons to learn Delphi

February 8, 2019

A couple of days ago I had a spectacular debate on Facebook. Like most individuals that are active in the IT community, my social media feed is loaded with advertising for every trending IT concept you can imagine. Lately these adverts have been about machine learning and A.I – or should I say, companies using those buzzwords to draw unwarranted attention to their products. I haven’t seen A.I used to sell shoes yet, but it’s only a matter of time before it happens.


Like any technology, Cloud is only as powerful as your insight

There is also this thing: yes, a 14-year-old can put together an A.I chat robot in 15 minutes with product XYZ. But that doesn’t mean he or she understands what is happening beneath the user-interface. Surely the goal must be to teach those kids skills that will benefit them for a lifetime.

Those that know me also know that yes, I have this tendency to say what I mean, even when I really should keep my mouth shut. On the other hand, that is also why companies and developers call me, because I will call bullshit and help them avoid it. That’s part of my job: to help individuals and companies that use Delphi to pick the right version for their needs, get the components that are right for their goals – and map out a strategy if they need some input on that. I’ll even dive in and do some code conversion if they need it; goes with the territory.

Normally I just ignore advertising that puts “cloud” or “a.i” in the title, because it’s mostly click-bait designed for non-developers. But for some reason this one particular advert caught my eye. Perhaps it triggered the trauma of being subjected to early Java advertising during the late 90s, or maybe it released latent aggression from being psychologically waterboarded by Microsoft Silverlight. Who knows 🙂

The ad was about a Norwegian company that specializes in teaching young students how to become professional developers. You know the “become a guru in 3 weeks” type publisher? What baked my noodle was the fact that they didn’t offer a single course involving archetypical languages, and that they were spinning their material with promises that were simply not true. The only artificial intelligence involved was the advertising engine at Facebook.

The thing is – the world has more than enough developers on desktop level. The desktop and web market is drowning in developers who have the capacity to download libraries, drop components on a form and hook up to a database. What the world really needs are more developers on archetypical languages. And if you don’t know what that is, then let me just do a quick summary before we carry on.

Archetypal languages

An archetypical programming language is one that is designed around how the computer actually works. As a consequence these languages and toolchains embody several of the following properties:

  • Pointers and raw memory access
  • Traditional memory management, no garbage collection
  • Procedural and object-oriented execution
  • Inline assembler
  • Little if no external dependencies
  • Static linking (embed pre-compiled code)
  • Compiled code can operate without an OS infrastructure
  • Suitable for kernel, driver, service, desktop, networking and cloud level development
  • Compiler that produce machine code for various chipsets

As of writing there are only two archetypical languages (actually three, but assembly language is chipset specific so we will skip it here), namely C/C++ and Object Pascal. These are the languages you use to write all the other languages with. If you plan on writing your own operating-system from scratch, only C and Pascal are suitable. Which is why these are the only languages that have ever been used for making operating systems.


Delphi is one of the 20 most used programming languages in the world. It ranked as #11 in 2017. Like all rankings it fluctuates depending on season and market changes.

Obviously I’m not suggesting that people learn Delphi or C++ builder to write their own OS – or that you must know assembly to make an invoice system; I’m simply stating that the insight and skill you get from learning Delphi and C/C++, even if all you do is write desktop applications, will make you a better developer on all levels.

Optimistic languages

Optimistic or humanized programming languages, have been around just as long as the archetypical ones. Basic is an optimistic language, C# and Java are optimistic languages, Go and Dart are equally optimistic languages. Script engines like node.js, python and Erlang (if you missed Scott Hanselman’s epic rant on the subject, you are in for a treat) are all optimistic. They are called optimistic because they trade security with functionality; sandboxing the developer from the harsh reality of hardware.

An optimistic language is typically designed to function according to “how human beings would like things to be” (hence the term optimistic). These languages rely heavily on existing infrastructure to even work, and each language tends to focus on specific tasks – only to branch out and become more general purpose over time.

There is nothing wrong with optimistic languages. Except when they are marketed to young students as being somehow superior or on par with archetypical languages. That is a very dangerous thing to do – because teachers have a responsibility to prepare the students for real life. I can’t even count the number of times I have seen young developers fresh out of college get “that job”, only to realize that the heart of the business, the mission critical stuff, is written in Delphi or C/C++, which they never learned.

People have no idea just how much of the modern world rests on these languages. It is almost alarming how it’s possible to be a developer in 2019 and have a blind spot with regards to these distinctions. Don’t get me wrong, it’s not the student’s fault, quite the opposite. And I’m happy that things are starting to change for the better (more about that further down).

The original full stack

So back to my little encounter; what happened was that I just commented something along the lines of “why not give the kids something that will benefit them for a lifetime”. It was just a drive-by comment on my part, and I should have just ignored it; and no sooner had I pressed enter than a small army of internet warriors appeared to defend their interpretation of “full stack” in 2019 – oblivious to the fact that the exact same term was used around 1988-ish. I think it was Aztec or SAS-C that coined it. Doesn’t matter.


The original “full stack” holds a very different meaning in traditional development. I don’t remember if it was Aztec-C or SAS-C, but the full stack was driver to desktop 🙂

Long story short, I ended up having a conversation with these teenagers about how technology has evolved over the past 35 years. Not in theory, but as one that has been a programmer since the C= 64 was released. I also introduced them to archetypal languages and pinpointed the distinction I made above. You cannot compare if you don’t know the difference.

I have no problems with other languages, I use several myself, and my point was simply that: if we are going to teach the next generation of programmers something, then let’s teach them the timeless principles and tools that our eco system rests on. We need to get Delphi and C/C++ back into the curriculum, because that in turn will help the students to become better developers. It doesn’t matter what they end up working with afterwards, because with the fundamental understanding in place they will be better suited. Period.

You will be a better Java developer if you first learn Delphi. You will be a better C# developer if you learn Delphi. Just like nature has layers of complexity, so does computing. And understanding how each layer works and what laws exist there – will have a huge impact on how you write high-level code.

All of this was good and well and the internet warriors seemed a bit confused. They weren’t prepared for an actual conversation. So what started a bit rough ended up as a meaningful, nice dialog.

And speaking of education: I’m happy to say that two universities in Norway now have students using Delphi again. Which is a step in the right direction! People are re-discovering how productive Object-Pascal is, and why the language remains the bread and butter of so many companies around the world.

Piracy, the hydra of problems

What affected me the most during my conversation with these young developers was that they had almost no relationship to either Delphi or C/C++. From an educational standpoint that is not just alarming, that is an intellectual emergency. The only knowledge they had of Delphi was hearsay and nonsense.


The source of the misrepresentation is piracy, openly so, of outdated versions that were never designed to run on modern operating systems. With the community edition people can enjoy a modern, high performance Delphi without resorting to illegal activities

But after a while I finally discovered where their information came from: Delphi 7 is being pirated en masse even to this day. For some strange reason it is very popular in Asia (most of the torrent IPs ended up there when I followed up on this). So teenagers download Delphi 7, which is ancient by any standard, and the first thing they experience is incompatibility issues. Which is only to be expected, because Delphi 7 was released a long, long time ago. But that is the impression they are left with after downloading one of these cracked, illegal bundles.

I downloaded one of these “ready to use” bundles to have a closer look, and it contained at least 500 commercial components. You had the full TMS component collection, Developer Express, RemObjects SDK, ImageEN, FastReports, SecureBlackBox, Intraweb — tens of thousands of dollars worth of code. With one very obvious factor: both Delphi 7 and the components involved are severely outdated. This is software from the same era as Windows XP, which Microsoft doesn’t even support any more. It was written in the early bronze age, so to speak.

So the reality of the situation was that these young developers had never seen a modern Delphi in their life. In their minds, Delphi meant Delphi 7, which they could download almost everywhere (illegally, and riddled with viruses – stay well clear). No wonder there is confusion about the subject (!)

They were very happy to learn about the community edition, so in the end I at least got to wake them up to the awesome features that modern Delphi represents. The community edition has been a fantastic thing; the number of members joining Delphi Developer on Facebook has nearly doubled since it was released.

A few of the students went over to Embarcadero and downloaded the community edition, and their jaw dropped. They had never seen a development environment like this before!

Give me five good reasons to learn Delphi

In light of this episode, I thought I could share five reasons why Delphi and Object Pascal remain my primary programming language.

I don’t have any problem dipping into JavaScript, Python or whatever the situation might call for – but when it comes to mission critical data processing, services that need 24/7 uptime, or embedded solutions where CPU spikes simply cannot be tolerated, it’s Delphi I turn to.

These five reasons are also the same that I gave the teenagers. So here goes.


Great depth and wingspan

Object Pascal, of which Delphi is the leading dialect, is a fantastic language. At heart there is little difference between C/C++ and Object Pascal in terms of features, but the syntax of Object Pascal is more productive than C/C++ (IMHO).

Delphi and C++ builder actually share run-time libraries (there are two of them: the VCL, which is Windows only, and FireMonkey, which is platform independent). Developers often mix and match code between these languages, so components written in Delphi can be used in C++ builder, and libraries written in C can be consumed and linked into your Delphi executable.
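
Just to illustrate the latter, here is a minimal sketch of what importing a C routine into Delphi can look like. The library and function names here are hypothetical, purely for illustration:

```pascal
unit CInterop;

interface

// Hypothetical C library and routine, for illustration only.
// The C side would export: int sum_buffer(const int *data, int count);
function sum_buffer(data: PInteger; count: Integer): Integer; cdecl;
  external 'mathlib.dll' name 'sum_buffer';

implementation

end.
```

The cdecl calling convention matches what most C compilers emit; for static linking you would instead pull the compiled object file in with the {$L} directive.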

One interesting factoid: people imagine Delphi to be old, but the C language is actually three years older than Pascal. These languages have evolved side by side, and Embarcadero (who makes Delphi and C++ builder) has brought all the interesting features you expect from a modern language into Delphi (things like generics, inline variables and anonymous procedures – it’s all in there). So the myth that Delphi is somehow outdated or unsuitable is just that – a myth.


The eco-system of programming languages

And there is an added bonus! Just like C/C++, Delphi represents a curriculum and lineage that spans decades. Stop and think about that for a second. This is a language that has evolved to solve technical challenges of every conceivable type for decades. That means you can put some faith in what the language can deliver.

There are millions of Delphi developers in the world; an estimated 10 million in fact. The language was ranked #11 on the TIOBE language index; it is under constant development with a clear roadmap and timeline – and is used by large and small companies as the foundation for their business. Even the Norwegian government relies on Delphi: the system that handles healthcare messages for the Norwegian population is pure Delphi. That is data processing for 5.2 million individuals.

Object Pascal has not just stood the test of time, it has kept pace with it. Just like C/C++, Object Pascal has a wingspan and depth that reach from assembler to system services, from database engines to visual desktop applications – and from the desktop all the way to cloud and essential web technology.

So the first good reason to learn Delphi is depth. Delphi covers the native stack, from kernel level drivers to high-speed database engines – to visual desktop applications. It’s also exceptionally well suited for cloud services (both Windows and Linux targets).


Easy to learn

I mentioned that Delphi is powerful and has the same depth as C/C++, but why then learn Delphi and not C++? Well, the language (Object Pascal) was specially tailored for readability. The designers concluded that the human brain recognizes words faster than symbols or glyphs – and thus it’s easier to read complex Pascal code than complex C code. Individual taste notwithstanding.


Despite its depth, Delphi is easy to learn and fun to master!

Object Pascal is also very declarative, with as few unknown factors as possible. This teaches people to write clean and orderly code.

And perhaps my favorite: a Pascal code file contains both interface and implementation, so you don’t have to maintain a separate .h header file as is common in C/C++.

If you already know OOP, be it from Java, C#, Rust or whatever – learning Delphi will be a piece of cake. You already know about classes, interfaces, generics and operator overloading – and can pretty much skip forward to memory management, pointers and structures (records in Pascal, structs in C).
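
To give you a taste of what that looks like in practice, here is a minimal, made-up unit. Note how the public contract and the actual code live in the same file:

```pascal
unit Geometry;

interface

type
  // Everything declared here is visible to other units –
  // there is no separate header file to keep in sync
  TCircle = class
  private
    FRadius: Double;
  public
    constructor Create(ARadius: Double);
    function Area: Double;
    property Radius: Double read FRadius write FRadius;
  end;

implementation

// Everything below this point is private to the unit

constructor TCircle.Create(ARadius: Double);
begin
  inherited Create;
  FRadius := ARadius;
end;

function TCircle.Area: Double;
begin
  Result := Pi * FRadius * FRadius;
end;

end.
```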

Swing by Embarcadero Academy and take a course, or head over to Amazon and buy some good books on Delphi. Download the Community Edition of Delphi and you will be up and running in no time.

Also remember to join Delphi Developer on Facebook, where thousands of active developers talk, help each other and share solutions 24/7.


Target multiple platforms

With Delphi and C++ builder it’s pretty easy to target multiple platforms these days. You can target Android, iOS, OS X, Windows and Linux from a single codebase.


One codebase, multiple targets

I mean, are you going to write one version of your app in Java, a second one in C#, a third in Objective-C and a fourth in Dart? Because that’s the reality you face if you plan on using the development tools provided by each operating-system manufacturer. That’s a lot of time, money and effort just to push your product out the door.

With Delphi you can hit all platforms at once with native code, reducing your time to market and improving your ROI. People use Delphi for a reason.

You will also enjoy great performance from the LLVM optimized code Delphi emits on mobile platforms.
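
To give a rough idea of how a single codebase serves several targets, here is a small sketch using conditional defines (the application name and folder layout are hypothetical):

```pascal
uses
  System.SysUtils, System.IOUtils;

// One routine, compiled for every target; only the
// platform-specific branch differs per build.
function SettingsFolder: string;
begin
  {$IFDEF MSWINDOWS}
  Result := TPath.Combine(TPath.GetHomePath, 'MyApp');
  {$ELSE}
  // Android, iOS, macOS and Linux each resolve their own
  // sandbox/home location through the same RTL call
  Result := TPath.Combine(TPath.GetDocumentsPath, 'MyApp');
  {$ENDIF}
end;
```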


Rich codebase

The benefit of age is often said to be wisdom; I guess the computing equivalent is a large and rich collection of components, libraries and ad-hoc code that you can drop into your own projects or just study.

You can google just about any subject, and there will be code for Delphi. Github, BitBucket and Torry’s Delphi pages are packed with open-source frameworks covering everything from compiler cores, MIDI interfaces and game development to multi-threaded, machine clustered server solutions. Once you start looking, you will find it.


There is a rich constellation of code, components and libraries for Delphi and C++ builder around the internet. Also remember dedicated sites like Torry’s.

There is also a long list of technology partners that produce components and libraries for Delphi – and as mentioned earlier, you can link in C compiled code once you learn the ropes.

Oh, and when I mentioned databases earlier I wasn’t just talking about the traditional databases. Delphi has you covered with those, no worries — I’m also talking about writing a database engine from scratch. There are several database engines implemented purely in Delphi; ElevateDB is one example.

Delphi also ships with InterBase and IBLite (embedded and mobile) so you have easy access to data storage solutions. There is also FireDAC, which allows you to connect directly with established databases — and again, a wealth of free and commercial solutions.
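
As a rough sketch of what talking to an embedded database looks like with FireDAC (the database file and table are hypothetical, and the exact uses clause may vary slightly between RAD Studio versions):

```pascal
uses
  FireDAC.Comp.Client, FireDAC.Stan.Def, FireDAC.DApt,
  FireDAC.Phys.SQLite;

procedure ListCustomers;
var
  Conn: TFDConnection;
  Query: TFDQuery;
begin
  Conn := TFDConnection.Create(nil);
  Query := TFDQuery.Create(nil);
  try
    Conn.Params.DriverID := 'SQLite';
    Conn.Params.Database := 'customers.db'; // hypothetical database file
    Query.Connection := Conn;
    Query.Open('SELECT name FROM customers');
    while not Query.Eof do
    begin
      WriteLn(Query.FieldByName('name').AsString);
      Query.Next;
    end;
  finally
    Query.Free;
    Conn.Free;
  end;
end;
```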


Speed and technique

What I love about Delphi and C++ is that your code, or the way you write code, directly impacts your results. The art of optimization is rarely a factor in some of the newer, optimistic languages. But in a native language you get to use traditional techniques that are timeless – or, perhaps more interesting, explore ways of achieving the same with less.

As native languages, Delphi and C/C++ produce fast executables. But I love how there is always room for your own techniques, your own components and your own libraries.


Techniques, like math, are timeless
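
A tiny example of the kind of low-level control I mean – walking a buffer with a raw pointer instead of indexed access, a classic native technique:

```pascal
// Sums an integer buffer using pointer arithmetic; the kind of
// hands-on technique garbage collected languages rarely expose.
function SumBuffer(const Data: array of Integer): Int64;
var
  P: PInteger;
  I: Integer;
begin
  Result := 0;
  if Length(Data) = 0 then
    Exit;
  P := @Data[0];
  for I := 0 to High(Data) do
  begin
    Inc(Result, P^);
    Inc(P); // advances the pointer by SizeOf(Integer)
  end;
end;
```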

Need to write a system driver? Suddenly speed becomes a very important factor. A garbage collector can be a disaster at that level, because it kicks in at intervals and causes CPU spikes. Perhaps you want to write a compiler, or need a solid scripting engine? How about linking the V8 JavaScript engine directly into your programs? All of this is quite simple with Delphi.

So with Delphi I get the best of both worlds: I get to use the scalpel when the needs are delicate, and the chain-saw to cut through tedious work. Things like property bindings are a godsend. This is a technique where you can visually bind properties of any component together, almost like events, and create cause-and-effect chains. So if a value changes on a bound property, it triggers whatever is bound to it, and so on — pretty awesome!

This means you can create a complete database application, with grid and navigation, without writing a single line of code. And that is just one simple example; you can do so much more out of the box – and it saves you a lot of time.

Yet when you really need to write high performance code, or build that killer framework that will set your company apart from the rest — you have that freedom!


So if you haven’t checked out RAD Studio, head over to Embarcadero and download a free trial. You will be amazed and realize just why Delphi and C++ builder are loved by so many.

Delphi “real life” webinars

February 1, 2019 Leave a comment

I got some great news for everyone!

For a while now we have been planning some Delphi community webinars. This will be a monthly webinar with a slightly different format from what people are used to. The style will be live and laid back, with a focus on real-life solutions that already exist, or are being developed – talking directly to the developers and MVPs involved.


There is so much cool stuff happening in the Delphi, C++ builder and Sencha scene that I hardly know where to begin. But what better way to spread the good news than to talk directly with the people building the components, publishing the software, writing that book or rolling the frameworks?

In the group Delphi Developer on Facebook we have a very laid back style, one I hope to transpose onto the webinars. We keep things clean, have clear rules and the atmosphere is friendly and easy-going. There is room for jokes and off topic posts on the weekends, but above all: we are active, solution oriented developers.

Delphi Developer, although small compared to the 6.5 million registered Delphi developers in the world (estimated Object Pascal use is closer to 10 million when factoring in alternative compilers), just reached 8000 active members. The membership growth rate in our little corner of the world has really picked up speed since the community edition. Seriously, it’s phenomenal to be a part of this; membership has more than doubled since 2017.

So there has never been a better time to do webinars on Delphi than right now 🙂

Making waves


Delphi has so much to offer

Two weeks ago I was informed that Delphi is once again being used by one of the largest Norwegian universities (!). That was an epic moment, because it is something we have worked hard to realize. I have been blogging, teaching and doing pro-bono work for a decade to get the ball rolling – and seeing the community revitalize itself is spectacular!

I work like mad every day to help companies with strategies involving Delphi, showing them how they can use Delphi to strengthen their existing infrastructure. I connect developers to employers, do casual drive-by headhunting, talk to component vendors — but education and awareness is what it’s all about. Your toolbox is only as useful as your knowledge of the tools. If you don’t know how or when to use a tool, you probably won’t use it much.

Making new developers aware of what Delphi is and what it can do is at the heart of this, especially developers that work with other languages. The reality of 2019 is that companies use several languages to build their infrastructure, and it’s important that they understand how Delphi can co-exist with and benefit their existing investment. So a fair share of my time goes to educating developers from other eco-systems. Most of them are not prepared for the great depth and wingspan Object Pascal has, and are flabbergasted when the full scope of the product hits them. Only C++ and Object Pascal scale from kernel to cloud. That’s the real full stack right there.

Delphi: The secret in the sauce

I keep up with what’s happening in many different parts of development, node.js and WebAssembly among them. Since everyone was strutting their stuff, I figured I might as well too, so I posted some videos about the Quartex Web Desktop I have been working on in my spare time (a personal project written in Object Pascal and compiled to JavaScript).

The result? The node.js groups on Facebook went nuts! Within minutes of posting I was bombarded by personal message requests, friend requests and even a marriage proposal. All of it from young web developers wanting to know “my secrets”.

Well, the secret is Delphi. I mean, I can sugarcoat it as much as I want, but without Delphi none of the auxiliary tools I use or made would exist. They are all made with Delphi – for Delphi developers. Smart Mobile Studio, the QTX framework, my libraries and tools – none of them would have seen the light of day had I never learned Delphi.

webdesktop

Node developers could not believe their eyes or ears when they learned that this system was coded in Object Pascal, using an “off the shelf” compiler that is 100% Delphi. DWScript and Smart Mobile Studio are a pretty common addition to a Delphi developer’s toolbox in 2019

What I’m trying to convey, to young developers especially, is that if you take the time to learn Delphi, you can pick from so many associated third-party technologies that will help you create incredible software: ImageEN, AToZed, DevEx, TMS Component Suite, Greatis Software, FastReports, DWScript, Smart Mobile Studio; and that is just the tip of the iceberg (not to mention the amazing products by Boian Mitov – talk about powerful solutions!). As a bonus you have thousands of free components and units on Github, Torry’s and other websites.

That’s a pretty strong case. We are talking real-life business here, not dorm-room philosophical idealism. You have 800,000 receipts on average hitting your servers on a daily basis — and 20,000 cash machines in Norway alone that must function 24/7. You have no room for CPU spikes on the embedded board, nor can you tolerate latency problems, or customers start walking. And you need it up and running yesterday. Having experienced that exact scenario, I can tell you right now that had we used any tool other than Delphi, it would have sunk the company.

The point? After posting some videos and chatting a bit with the node.js devs, Delphi Developer got infused with a sizable chunk of young node.js developers eager to learn more about this “Delphi thing”. And they will become better node developers for it.

EDIT: I started this day (01.02.19) with a call from a university student. He was fed up with Java and C# because he wanted to learn native programming. He had noticed the node.js post and became curious, so I set him up with the community editions of both Delphi and C++ builder. When he masters the basics I will introduce him to the inline assembler. There is a gap in modern education where Delphi used to sit, and no matter how much they try to fill it, bytecodes can’t replace solid knowledge of how a computer actually works.

So indeed! These webinars will be great fun to make. We have so many fantastic developers to invite, techniques to explore, components to demo – and room for questions! The hard part right now is picking topics, because we have so much to choose from.

For example, did you know there is a TV channel operated using Delphi software? It has been running without a glitch for decades. Rock solid and high performance. How cool is that! Talk about a real-life solution. Delphi is everywhere.

I’ll get back to you with more information in due time ~ Cheers!

Amibian.js under the hood

December 5, 2018 2 comments

Amibian.js is gaining momentum as more and more developers, embedded systems architects, gamers and retro computer enthusiasts discover the project. And I have to admit I’m pretty stoked about what we are building here myself!


In a life-preserver no less 😀

But, as with any new technology or invention, there are two common traps people can fall into. The first trap is to gravely underestimate a technology. JavaScript certainly invites this, because only a decade ago the language was little more than a toy. Since then JavaScript has evolved to become the most widely adopted programming language in the world, and runtime engines like Google’s V8 run JavaScript almost as fast as compiled binary code (“native” meaning machine code, like that produced by a C/C++ compiler, a Pascal compiler or anything else that produces programs that run directly under Linux or Windows).

It takes some adjustment, especially for traditional programmers that haven’t paid attention to where browsers have gone – but long gone are the days of interpreted JavaScript. Modern JavaScript is first parsed, tokenized and compiled to bytecodes. These bytecodes are then JIT compiled (“just in time”, meaning the compilation takes place inside the browser) to real machine code using state of the art techniques (LLVM). So the JavaScript of 2018 is by no means the JavaScript of 2008.

The second trap is to exaggerate what a new technology can do, and attach abilities and expectations to a product that simply cannot be delivered. It is very important to me that people don’t fall into either trap, and that everyone is informed about what Amibian.js actually is and can deliver – but also what it won’t deliver. Rome was not built in a day, and it’s wise to study all the factors before passing judgement.

I have been truly fortunate that people support the project financially via Patreon, and as such I feel it’s my duty to document and explain as much as possible. I am a programmer and I often forget that not everyone understands what I’m talking about. We are all human and make mistakes.

Hopefully this post will paint a clearer picture of Amibian.js and what we are building here. The project is divided into two phases: first to finish Amibian.js itself, and secondly to write a Visual Studio clone that runs purely in the browser. Since it’s easy to mix these things up, I’m underlining the distinction early – just in case.

What the heck is Amibian.js?

Amibian.js is a group of services and libraries that combined create a portable operating-system that renders to HTML5. A system written using readily available web technology, and designed to deliver advanced desktop functionality to web applications.

The services that make up Amibian.js were designed to piggyback on a thin Linux crust, where Linux deals with the hardware, drivers and the nitty-gritty we take for granted. There is no point trying to write a better kernel in 2018, because you are never going to catch up with Linus Torvalds. It’s much more interesting to push modern web technology to the absolute limits, and build a system that is truly portable and distributed.

smart_ass

Above: Amibian.js is created in Smart Pascal and compiled to JavaScript

The service layer is written purely for node.js (JavaScript), which guarantees the same behavior regardless of host platform. One of the benefits of using off-the-shelf web technology is that you can physically copy the whole system from one machine to another without any changes. So if you have a running Amibian.js system on your x86 PC and copy all the files over to an ARM computer – you don’t even have to recompile the system. Just fire up the services and you are back in the game.

Now before you dismiss this as “yet another web mockup”, please remember what I said about JavaScript: the JavaScript of 2018 is not the JavaScript of 2008. No other language on the planet has seen as much development as JavaScript, and it has evolved from a “browser toy” into the most important programming language of our time.

So Amibian.js is not some skin-deep mockup of a desktop (lord knows there are plenty of those online). It implements advanced technologies such as remote filesystem mapping, an object-oriented message protocol (Ragnarok), RPCS (remote procedure call invocation stack), video codec capabilities and much more — all of it done with JavaScript.

In fact, one of the demos that Amibian.js ships with is Quake III recompiled to JavaScript. It can push 120 fps flawlessly (though the browser caps display at 60 fps) and makes full use of standard browser technologies (WebGL).


Click on picture above to watch Amibian.js in action on YouTube

So indeed, the JavaScript we are talking about here is cutting edge. Most of Amibian.js is compiled as “asm.js”, which means the V8 runtime (the engine that runs JavaScript inside the browser, or as a program under node.js) will JIT compile it to highly efficient machine code.

Which is why Amibian.js is able to do things that people imagine impossible!

Ok, but what does Amibian.js consist of?

Amibian.js consists of many parts, but we can divide it into two categories:

  • A HTML5 desktop client
  • A system server and various child processes

These two categories have the exact same relationship as the X desktop and the Linux kernel. The client connects to the server, invokes procedures to do some work, and then visually represents the response. This is identical to how the X desktop calls functions in the kernel or one of the Linux libraries. The difference between a traditional, machine-code based OS and our web variation is that our version doesn’t have to care about the hardware. We can also assign many different roles to Amibian.js (more about that later).


Enjoying other cloud applications is easy with Amibian.js, here is Plex, a system very much based on the same ideas as Amibian.js

And for the record: I’m trying to avoid a bare-metal OS, otherwise I would have written the system in a native programming language like C or Object Pascal. So I am not using JavaScript because I lack skill in native languages; I am using JavaScript because native code is not relevant for the tasks Amibian.js solves. If I had used a native back-end I could have finished this in a couple of months, but a native server would be unable to replicate itself between cloud instances, because chipset and CPU would be determining factors.

The Amibian.js server is not a single program. The back-end consists of several service applications (daemons on Linux) that each deliver specific features. The combined functionality of these services makes up “the Amibian kernel” in our analogy with Linux. You can think of these services as the library files in a traditional system, and programs written for Amibian.js can call on them for a wide range of tasks. It can be as simple as reading a file, or as complex as registering a new user or requesting admin rights.

The greatest strength of Amibian.js is that it’s designed to run clustered, using as many CPU cores as possible. It’s also designed to scale, meaning it will replicate itself and divide the work between different instances. This is where things get interesting, because an Amibian.js cluster doesn’t need the latest and coolest hardware to deliver good performance. You can build a cluster from old PCs in your office, or from a handful of embedded boards (ODroid XU4, Raspberry PI and Tinkerboard are brilliant candidates).

But why Amibian.js? Why not just stick with Linux?

That is a fair question, and this is where the roles I mentioned above come in.

As a software developer, many of my customers work with embedded devices and kiosk systems. You have companies that produce routers and set-top boxes, NAS boxes of various complexity, and ticket systems for trains and buses; and all of them end up having to solve the same needs.

What each of these manufacturers has in common is the need for a web desktop system that can be adapted for a specific program. Any idiot can write a web application, but when you need safe access to the filesystem, or unified APIs that can delegate signals to Amazon, Azure or your company server, things suddenly get more complicated. And even when you have all of that, you still need a rock solid application model suitable for distributed computing. You might have 1 ticket booth, or 10,000 nationwide. There are no systems available that are designed to deal with web technology on that scale. Yet 😉

Let’s look at a couple of real-life scenarios that I have encountered; I’m confident you will recognize a common need. Here are some roles that Amibian.js can assume, helping deliver a solution rapidly. It also gives you some ideas of the economic possibilities.

Updated: Please note that we are talking JavaScript here, not native code. There are a lot of native solutions out there, but the whole point is to forget about CPU, chipset and target, and have a system floating on top of whatever is beneath.

  • When you want to change some settings on your router, you login to it. It contains a small Apache server (or something similar) and you do all your maintenance via that web interface. This web interface is typically skin-deep, annoying to work with, and a pain for developers to update, since it’s connected to a native Apache module that is 100% dependent on the firmware. Each vendor ends up re-inventing the wheel over and over again.
  • When you visit a large museum, notice the displays. A museum needs to present multimedia, preferably on touch capable devices, throughout the different exhibits. The cost of having a developer create native applications that display the media, play the movies and give visual feedback is astronomical. Which is why most museums adopt web technology to handle media presentation and interaction. Again they re-invent the wheel, with varying degrees of success.
  • Hotels have more or less the exact same need, but on a smaller scale – especially the larger hotels, where the lobby has information booths and each room displays a web interface via the TV.
  • Shopping malls face the same challenge, and depending on their size they can need anything from a single node to a hundred.
  • Schools and educators spend millions on training software and programming languages every year. Amibian.js can deliver both, and the schools would only pay for maintenance and adaptation – the product itself is free. Kids get the benefit of learning traditional languages while enjoying instant visual feedback! They can learn Basic, Pascal, JavaScript and C. I firmly believe that the classical languages will help make them better programmers as they evolve.

You are probably starting to see the common denominator here?

They all need a web-based desktop system, one that can run complex HTML5 based media applications and give them the same depth as a native operating-system; which is pretty hard to achieve with JavaScript alone.

Amibian.js provides a rich foundation of more than 4000 classes that developers can use to write large, complex and media rich applications (see Smart Mobile Studio below). Just like Linux and Windows provides a wealth of libraries and features for native application development – Amibian.js aims to provide the same for cloud and embedded systems.

And as the name implies, it has roots in the past, with the machine that defined multimedia: the Commodore Amiga. The relation is more than just visual; Amibian.js uses the same system architecture – because we believe it’s one of the best systems ever designed.

If JavaScript is so poor, why should we trust you to deliver so much?

First of all, I’m not selling anything. It’s not like this project is going to make me a ton of cash. I ask for support during the development period because I want to allocate proper time for it, but when done, Amibian.js will be free for everyone (LGPL). And I’m also writing it because it’s something I need and haven’t seen anywhere else. I think you have to write software for yourself, otherwise the quality won’t be there.

Secondly, writing Amibian.js in raw JavaScript with the same amount of functions and depth would take years. The reason I am able to deliver so much functionality quickly is that I use a compiler system called Smart Mobile Studio. This saves months, if not years, of development time, and I get all the benefits of OOP.

Prior to starting the Amibian.js project, I spent roughly 9 years creating Smart Mobile Studio. Smart is not a solo project, many individuals have been involved – and the product provides a compiler, IDE (editor and tools), and a vast run-time library of pre-made classes (roughly 4000 ready to use classes, or building-blocks).


Writing large-scale node.js services in Smart is easy, fun and powerful!

Unlike other development systems, Smart Mobile Studio compiles to JavaScript rather than machine-code. We have spent a great deal of time making sure we could use proper OOP (object-oriented programming), and we have spent more than 3 years perfecting a visual application framework with the same depth as the VCL or FMX (the core visual frameworks for C++ builder and Delphi).

The result is that I can knock out a large application that a normal JavaScript coder would spend weeks on – in a single day.

Smart Mobile Studio uses the Object Pascal language, a dialect roughly 70% compatible with Delphi. Delphi is exceptionally well suited for writing large, data driven applications. It also thrives in embedded systems and low-level system services. In short: it’s a lot easier to maintain 50,000 lines of Object Pascal code than 500,000 lines of JavaScript code.

Amibian.js, both the service layer and the visual HTML5 client application, is written completely in Smart Mobile Studio. This gives me, as the core developer of both systems, a huge advantage (who knows it better than the designer, right?). I also get to write code that is truly OOP (classes, inheritance, interfaces, virtual and abstract methods, partial classes etc.), because our compiler crafts something called a VMT (virtual method table) in JavaScript.

Traditional JavaScript doesn’t have classical OOP; it has something called prototypes. With Smart Pascal I get to bring in code from the Object Pascal community – components and libraries written in Delphi or Freepascal, which number in the hundreds of thousands. Delphi alone has a massive library of code to pick from; it’s been a popular toolkit for ages (C is only 3 years older than Pascal).

But how would I use Amibian.js? Do I install it or what?

Amibian.js can be setup and used in 4 different ways:

  • As a true desktop, booting straight into Amibian.js in full-screen
  • As a cloud service, accessing it through any modern browser
  • As a NAS or Kiosk front-end
  • As a local system on your existing OS, a batch script will fire it up and you can use your browser to access it on https://127.0.0.1:8090

So the short answer is yes, you install it. But it’s the same as installing Chrome OS; it’s not like an application you just install on your Linux, Windows or OSX box. The whole point of Amibian.js is to have a platform independent, chipset agnostic system. Something that doesn’t care whether you are using ARM, x86, PPC or MIPS as your CPU of preference. Developers will no doubt install it on their existing machines; Amibian.js is non-intrusive and does not affect or touch files outside its own eco-system.

But the average non-programmer will most likely set up a dedicated machine (or several), or just deploy it on their home NAS.

The first way of enjoying Amibian.js is to install it on a PC or ARM device. A disk image will be provided for supporters so they can get up and running ASAP. This disk image will be based on a thin Linux setup, just enough to get all the drivers going (but no X desktop!). It will start all the node.js services and finally enter a full-screen web display (based on Chromium Embedded) that renders the desktop. This is the method most users will prefer to work with Amibian.js.

The second way is to use it as a cloud service. You install Amibian.js like mentioned above, but you do so on Amazon or Azure. That way you can login to your desktop using nothing but a web browser. This is a very cost-effective way of enjoying Amibian.js since renting a virtual instance is affordable and storage is abundant.

The third option is for developers. Amibian.js is a desktop system, which means it’s designed to host more elaborate applications. Where you would normally just embed an external website into an IFrame, Amibian.js is not that primitive. Hosting external applications requires you to write a security manifest file, but more importantly: the application must interface with the desktop through the window’s message-port. This is a special object that is sent to the application as a hand-shake, and the only way for the application to access things like the file-system and server-side functionality is via this message-port.

Calling “kernel” level functions from a hosted application is done purely via the message-port mentioned above. The actual message data is JSON and must conform to the Ragnarok client protocol specification. This is not as difficult as it might sound, but Amibian.js takes security very seriously – so applications trying to cause damage will be promptly shut down.
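
The Ragnarok specification itself is defined by the project, so take the following as a purely hypothetical sketch of what composing such a JSON request could look like. The routine and field names are invented for illustration, written here as Delphi-style Object Pascal using System.JSON:

```pascal
uses
  System.JSON;

// Hypothetical shape of a Ragnarok-style request; the real field
// names are dictated by the protocol specification.
function BuildFileReadRequest(const SessionID, Path: string): string;
var
  Msg: TJSONObject;
begin
  Msg := TJSONObject.Create;
  try
    Msg.AddPair('routine', 'filesystem.read'); // invented routine name
    Msg.AddPair('session', SessionID);
    Msg.AddPair('path', Path);
    Result := Msg.ToJSON; // this string goes through the message-port
  finally
    Msg.Free;
  end;
end;
```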

You mention hosted applications, do you mean websites?

Both yes and no: Amibian.js supports 3 types of applications:

  • Ordinary HTML5/JS based applications, or “websites” as many would call them. But as I described above, they have to establish a dialog with the desktop before they can do anything useful.
  • Hybrid applications where one half is installed as a node.js service and the other half is served as a normal HTML5 app. This is the coolest program model: developers essentially write both a server and a client, and then deploy them as a single package.
  • LDEF compiled bytecode applications, using a 68k inspired assembly language that is JIT compiled by the browser (commonly called “asm.js”) and runs extremely fast. The LDEF virtual machine is a sub-project of Amibian.js.

The latter option, bytecodes, is a bit like Java. A part of the Amibian.js project is a compiler and runtime system called LDEF.


Above: The Amibian.js LDEF assembler, here listing opcodes + disassembling a method

The first part of the Amibian.js project is to establish the desktop and back-end services. The second part is to create the world’s first cloud based development platform. A full Visual Studio clone if you like, that allows anyone to write cloud, mobile and native applications directly via the browser (!)

Several languages are supported by LDEF, and you can write programs in Object Pascal, Basic and C. The Basic dialect is especially fun to work with, since it’s a re-implementation of BlitzBasic (with a lot of added extras). Amiga developers will no doubt remember BlitzBasic; it was used to create some great games back in the 80s and 90s. It’s well suited for games and multimedia programming, and above all – very easy to learn.

More advanced developers can enjoy Object Pascal (read: Delphi) or a subset of C/C++.

And please note: This IDE is designed for large-scale applications, not simple snippets. The ultimate goal of Amibian.js is to move the entire development cycle to the cloud and away from the desktop. With Amibian.js you can write a cool “app” in BlitzBasic, run it right in the browser — or compile it server-side and deploy it to your Android Phone as a real, natively compiled application.

So any notion of a “mock desktop for HTML” should be firmly put to the side. I am not playing around with this product and the stakes are very real.

But why don’t you just use ChromeOS?

There are many reasons, but the most important one is chipset independence. Chrome OS is a native system, meaning that its core services are written in C/C++ and compiled to machine code. The fundamental principle of Amibian.js is to be 100% platform agnostic – “no native code allowed”. This is why the entire back-end and service layer targets node.js, which ensures the same behavior regardless of processor or host system (Linux being the default host).

Node.js has the benefit of being 100% platform independent. You will find node.js for ARM, x86, Mips and PPC. This means you can take advantage of whatever hardware is available. You can even recycle older computers that have lost mainstream support, and use them to run Amibian.js.

A second reason: Chrome OS might be free, but it’s only as open as Google wants it to be. ChromeOS is not just something you pick up and start altering. Its dependence on native programming languages, compiler toolchains and a huge set of libraries makes it extremely niche. It also shields you utterly from the interesting parts, namely the back-end services. It’s quite frankly boring and too boxed in for any practical use; except for Google and its technology partners, that is.

I wanted a system that I could move around, that could run in the cloud or on cheap SBCs. A system that could scale from handling 10 users to 1000 users – a system that supports clustering and can be installed on multiple machines in a swarm.

A system that anyone with JavaScript knowledge can use to create new and exciting systems, that can be easily expanded and serve as a foundation for rich media applications.

What is this Amiga stuff, isn’t that an ancient machine?

In computing terms, yes – but so is Unix. Old doesn’t automatically mean bad; it actually means the system has adapted and survived challenges beyond its initial design. While most of us remember the Amiga for its games, I remember it mainly for its elegant and powerful operating-system. A system so flexible that it’s still in use around the world – 33 years after the machine hit the market. That is quite an achievement.


The original Amiga OS, not bad for a 33-year-old OS! It was and continues to be way ahead of everyone else. A testament to the creativity of its authors

Amibian.js, as the name implies, borrows architectural elements en masse from Amiga OS. Quite simply because the way Amiga OS is organized, and the way you approach computing on the Amiga, is brilliant. Amiga OS is much more intuitive and easier to understand than Linux or Windows. It’s a system you could learn to use fully within just a couple of days of exploring – and without manuals.

But the similarities are not just visual or architectural. Remember I wrote that hosted applications can access and use the Amibian.js services? These services implement as much of the original ROM kernel functions as possible. Naturally I can’t port all of it, because not all of it is relevant for Amibian.js. Things like device-drivers serve little purpose here, because Amibian.js talks to node.js, node talks to the actual system, and the system handles the hardware devices. But the way you create windows, visual controls, bind events and build a modern, event-driven application has been preserved to the best of my ability.

But how does this thing boot? I thought you said server?

If you have set up a dedicated machine with Amibian.js, then the boot sequence is the same as Linux, except that the node.js services are executed as background processes (daemons, or services as they are also called), the core server is initialized, and a full-screen HTML5 view is set up that shows the desktop.

But that is just for starting the system. Your personal boot sequence which deals with your account, your preferences and adaptations – that boots when you login to the system.

When you login to your Amibian.js account, no matter if it’s just locally on a single PC, a distributed cluster, or via the browser into your cloud account — several things happen:

  1. The client (the web-page, if you like) connects to the server using WebSocket
  2. The login is validated by the server
  3. The client starts loading preference files via the mapped filesystem, and applies them to the desktop
  4. A startup-sequence script file is loaded from your account and executed. The shell-script runtime engine is built into the client, as is REXX execution
  5. The startup-script sets up configurations, creates symbolic links (assigns), and mounts external devices (Dropbox, Google Drive, FTP locations and so on)
  6. When finished, the programs in the ~/WbStartup folder are started. These can be both visual and non-visual

As you can see, Amibian.js is not a mockup or “fake” desktop. It implements all the advanced features you expect from a “real” desktop. The filesystem mapping is especially advanced: file-data is loaded via special drivers – drivers that act as a bridge between a storage service (a harddisk, a network share, an FTP host, Dropbox or whatever) and the desktop. Developers can add as many of these drivers as they want. If they have their own homebrew storage system on their existing servers, they can implement a driver for it. This ensures that Amibian.js can access any storage device, as long as the driver conforms to the driver standard.
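
Conceptually (and this is a hypothetical sketch – the real Amibian.js driver contract is defined by the project), such a storage driver boils down to a uniform set of calls that every backend implements:

```pascal
uses
  System.SysUtils;

type
  // Hypothetical driver contract, for illustration only
  TStorageDriver = class abstract
  public
    function FileExists(const Path: string): Boolean; virtual; abstract;
    function ReadFile(const Path: string): TBytes; virtual; abstract;
    procedure WriteFile(const Path: string; const Data: TBytes); virtual; abstract;
  end;
```

A concrete driver bridges exactly one backend – an FTP host, Dropbox, a local disk – behind the same calls, so the desktop never needs to know where the bytes actually live.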

In short, you can create, delete, move and copy files between these devices just like you do on Windows, OSX or the Linux desktop. And hosted applications that run inside their own window can likewise request access to these drivers and work with the filesystem (and much more!).

Wow this is bigger than I thought, but what is this emulation I hear about? Can Amibian.js really run actual programs?

Amibian.js has a JavaScript port of UAE (Unix Amiga Emulator). This is a fork of SAE (Scripted Amiga Emulator) that has been heavily optimized for the web. Not only is it written in JavaScript, it performs brilliantly, and thus allows us to boot into a real Amiga system. So if you have some floppy images of a game you love, that will run just fine in the browser. I even booted a 2 gigabyte harddisk image 🙂

But Amiga emulation is just the beginning. More and more emulators are being ported to JavaScript: you have NES, SNES, N64, PSX I & II, Sega Megadrive and even a Neo Geo port. So playing your favorite console games right in the browser is pretty straight forward!

But the really interesting part is probably QEmu. This allows you to run x86 instances directly in the browser too. You can boot up Windows 7 or Ubuntu inside an Amibian.js window if you like. Perhaps not practical (at this point), but it shows some of the potential of the system.

I have been experimenting with a distributed emulation system, where the emulation is executed server-side and only the graphics and sound are streamed back to the Amibian.js client in real-time. This has been possible for years via Apache Guacamole, but doing it in raw JS is more fitting with our philosophy: no native code!

I heard something about clustering, what the heck is that?

Remember I wrote about the services that Amibian.js has? The ones that act almost like libraries on a physical computer? Well, these services don’t have to be on the same machine – you can place them on separate machines, and thus the system is able to work faster.


Above: The official Amibian.js cluster, 4 x ODroid XU4s SBC’s in a micro-rack

A cluster is typically several computers connected together, with the sole purpose of having more CPU cores to divide the work between. The cool thing about Amibian.js is that it doesn’t care about the underlying CPU. As long as node.js is available, it will happily run whatever service you like – with the same behavior and result.

The official Amibian.js cluster consists of 5 ODroid XU4/S SBCs (single board computers). Four of these are so-called “headless” computers, meaning they don’t have an HDMI port – they are designed to be logged into, with software set up via SSH or similar tools. The last machine is an ODroid XU4 with an HDMI out port, which serves as “the master”.

The architecture is quite simple: we allocate one whole SBC to a single service, and allow the service to copy itself to use all the CPU cores available (each SBC has 8 CPU cores). With this architecture, the machine that deals with the desktop clients doesn’t have to do all the grunt work. It accepts tasks from the user and hosted applications, and then delegates the tasks between the 4 other machines.

Note: The number of SBCs is not fixed. Depending on your use, you might not need more than a single SBC in your home setup, or perhaps two. I have started with 5 because I want each part of the architecture to have as much CPU power as possible. So the first “official” Amibian.js setup is a 40 core monster shipping at around $250.

But as mentioned, you don’t have to buy this to use Amibian.js. You can install it on a spare x86 PC you have, or daisy-chain a couple of older PCs on a switch for the same result.

Why Headless? Don’t you need a GPU?

The headless SBCs in the initial design all have a GPU (graphical processing unit) as well as audio capabilities. What they lack is GPIO pins and 3 additional USB ports. So each of the nodes in our cluster can handle graphics at blistering speed — but that is ultimately not their task. They serve more as compute modules that will be given tasks to finish quickly, while the main machine deals with users, sessions, traffic and security.

The 40 core cluster I use has more computing power than northern Europe had in the early 80s; that’s something to think about. And the pricetag is under $300 (!). I don’t know about you, but I have always wanted a proper mainframe – a distributed computing platform that you can login to and that can perform large tasks while I do something else. This is as close as I can get on a limited budget, yet I find the limitations thrilling and fun!

Part of the reason I have opted for a clustered design has to do with future development. While UAE.js is brilliant for emulating an Amiga directly in the browser, a more interesting design is to decouple the emulation from the output. In other words: run the emulation at full speed server-side, and just stream the display and sounds back to the Amibian.js display. This would ensure that emulation, of any platform, runs as fast as possible, makes use of multi-processing (read: multi-threading) and fully utilizes the network bandwidth within the design (the cluster runs on its own switch, separate from the outside world-wide-web).

I am also very interested in distributed computing, where we split up a program and run each part on different cores. This is a topic I want to investigate further when Amibian.js is completed. It would no doubt require a re-design of the LDEF bytecode system, but this something to research later.

Will Amibian.js replace my Windows box?

That depends completely on what you use Windows for. The goal is to create a self-sustaining system. For retro computing, emulation and writing cool applications, Amibian.js will be awesome. But Rome was not built in a day, so it’s wise to be patient and approach Amibian.js like you would Chrome OS. Some tasks are better suited for native systems like Linux, but more and more tasks will run just fine on a cloud desktop like Amibian.js.

Until the IDE and compilers are in place after phase two, the system will be more like an embedded OS. But when the LDEF compiler and IDE are in place, people will start using it en masse and producing applications for it. It’s always a bit of work to reach that point and create critical mass.


Object Pascal is awesome, but modern, native development systems are quite demanding

My personal need has to do with development. Some of the languages I use install gigabytes onto my PC, and you need a full laptop to access them. I love Amibian.js because I will be able to work anywhere in the world, as long as a browser and a normal internet line are available. In my case, I can install a native compiler on one of the nodes in the cluster and have LDEF emit compatible code; voila, you can build app-store ready applications from within a browser environment.

 

I also love that I can set up a dedicated platform that runs legacy applications and games – and that I can write new applications and services using modern, off-the-shelf languages. And should a node in the cluster break down, I can just copy the whole system over to a new, affordable SBC and keep going. No super expensive hardware to order, no absurd hosting fees – and finally a system that we can all shape and use in a plethora of settings; from a fully fledged desktop to a super advanced NAS or router that uses Amibian.js to give its customers a fantastic experience.

And yes, I get to re-create the wonderful reality of Amiga OS without the absurd egoism that dominates the Amiga scene to this day. I don’t even know where to begin with the present license holders – and I am so sick of the drama that rolling my own seemed the only reasonable path forward.

Well – I hope this helps clear up any misconceptions about Amibian.js, and that you find this as interesting as I do. As more and more services are pushed cloud-side, Amibian.js will only become more relevant. It is perfect as a foundation for large-scale applications, embedded systems – and indeed, as a solo platform running on embedded devices!

I can’t wait to finish the services and cluster this sucker on the ODroid rack!

If you find this project interesting, head over to my Patreon website and get involved! I could really use your support, even if it’s just a $5 “high five”. Visit the project at: http://www.patreon.com/quartexNow

Admin woes on Delphi Developer

November 17, 2018 8 comments

For well over 10 years I have been running different interest groups on Facebook. While Delphi Developer is without a doubt the one that receives the most attention from myself and my fellow moderators, I also run the Quartex Components group and, lately, Amiga Disrupt. The latter is dedicated to my favorite hobby, namely retro computing.

I have to say, it’s getting harder to operate these groups under the current Facebook regime. I applaud them for implementing a moral codex that is both fair and good, but that also means their code must be able to distinguish between random acts of hate and bullying, and moderator operations.

A couple of days ago I posted an update picture from Amibian.js. This is a picture of my VMware development platform, with Pascal code, node.js and the HTML5 desktop running. You would have to be completely ignorant of technology not to recognize the picture as having to do with software development.


This picture was flagged as hateful, and was enough to get an admin’s account frozen for 30 days

Sadly Facebook contains all sorts of people, and for some reason even grown men will get into strange, ideological debates about what constitutes retro-computing. In this case the user was a die-hard original-Amiga fan, who on seeing my post about Amibian.js went on a spectacular rant; listing, in alphabetical and chronological order, the depths of depravity people have stooped to in implementing 68k as JavaScript.

Well, I get 2-3 of these comments a week, and the rules for the group are crystal clear: if you post comments like that, or comments that are racist, hateful or otherwise provocative to the general group standard — you are given a single warning, and then you are out.

So I gave him a warning that such comments are not welcome; he immediately came back with an even worse response – and that was the end of that.

But before I managed to kick the user, he reported a picture of Amibian as hateful. Again, we are talking about a screen-dump from VMware with Pascal code. No hate, no poor choice of images – nothing that would violate ordinary Facebook standards.

The result? Facebook has now frozen my account for 30 days (!)

Well, I’m not even going to bother being upset, because this is not the first time. When people willfully seek out conflict, only to use FB’s reporting tools as weapons of revenge — well, there is not much I can do.

Anyways, Gunnar, Glenn, Peter and Dennis have you covered – and I’ll see you in a month. I think it’s time I contact FB in Oslo and establish separate management profiles.

Delphi Developer Demo Competition votes

November 3, 2018 Leave a comment

A month ago we set up a demo competition on Delphi Developer. It’s been a few years since we last did this, and demo competitions are always fun no matter what, so it was high time we set this up!


This year’s prizes are awesome!

Initially we had a limit of at least 10 contestants for the competition to go through, but I will make an exception this time. The prizes are great and worth a good effort. I was a bit surprised by the low number of contestants, since more than 60 developers signed our poll about the event; I was honestly hoping for at least 20.

I think the timing was a bit off; we are close to the end of the year and most developers are working under deadlines. So next year I think I’ll move the date to June or July.

Be that as it may – a demo competition is a tradition by now, so we proceed to the voting process!

The contestants

The contestants this year are:

  • Christian Hackbart
  • Mogens Lundholm
  • Steven Chesser
  • Jens Borrisholt
  • Paul Nicholls

Note: Dennis is a moderator on Delphi Developer, as such he cannot partake in the voting process.

The code

Each contestant has submitted a project to the following repositories (in the same order as the names above), so make sure you check out each one and inspect them carefully before casting your vote.

Voting

We use the poll function built into Facebook, so just visit us at Delphi Developer to cast your vote! You can only vote once, and there is a 1 week deadline (so voting closes on the 10th of this month).

Delphi Developer Competition

September 28, 2018 Leave a comment

The Delphi Developer group on Facebook has been around for a few years, and in that time we have held two very interesting demo competitions. The last competition we held was for Smart Pascal (Smart Mobile Studio) only, but we are extending this one to include all the dialects supported by our group, meaning Delphi, Smart Pascal, Freepascal and RemObjects Oxygene!

Embarcadero shipped over some extra goodies for us, so the competition this year is indeed a magical one. The top 3 contestants all get the official Embarcadero T-Shirt. We also throw in 10 Sencha ball-pens for each of the top 3 contestants; this is in addition to the actual prizes listed below (!)

The #1 winner not only gets the 100€ FPGA devkit (see prizes below), he or she also walks off with a high-quality, stainless steel Embarcadero branded coffee mug that holds half a litre of breakfast! (I seriously wanted to keep this for myself.)


The prizes in all their glory!

Submission rules are:

  • Source submission (GPL, LGPL) + binary
  • No dependencies on commercial libraries or components
  • Submissions must be available through GIT or BitBucket
  • Submission must include everything it needs to be compiled

Submission categories are:

  • Graphical demo (demo-scene style)
  • Games and multimedia
  • General purpose (utility programs)

Use the following Google form to register:

The purpose of the submissions is to show off both the language and your skills. Back in 2013 we got a ton of really cool demo-scene stuff demonstrating timeless techniques; everything from bouncing meta-balls, Gouraud-shaded vectors and sine scroll-texts to WebGL landscape flight. We also had a fantastic fractal explorer program, a bitmap rotozoom generator – and two great games, both of which made it onto the App Store and Google Play!

First prize

first_price.png

The winner walks off with some exciting stuff!

The first prize this year is something really, really special. The winner walks off with a spiffing Altera Cyclone IV FPGA starter board. This is a spectacular FPGA kit that allows you to upload a wide range of ready-to-rock FPGA cores, as well as your own logic designs.

But to make it more accessible we added a retro daughterboard, which gives you VGA, audio, keyboard, mouse, MicroSD, serial and two old-school joystick ports. The daughterboard is needed if you plan on using some of the retro-cores out there. I personally love the Amiga core (shock, I know), but you can run anything from a humble Spectrum to the Sega Megadrive, SNES, Atari ST/E, Neo-Geo and many others.

While the daughterboard makes this wonderful for retro-computing and gaming, FPGA is first and foremost a tool for engineering. The kit ships with a USB-Blaster which allows you to connect it directly to your PC, where it will be recognized as a device. FPGA modeling applications will pick this up, and you can test out designs “live”, or just place a core on the SD-card and edit the boot config.

The kit sells for roughly 100€ with a case, but getting both the motherboard and the retro daughterboard is difficult. These things are sold separately, and the daughterboard is produced in small numbers by dedicated hackers. So winning a kit that is pre-assembled, soldered and ready to go is quite a prize!

If you are even remotely interested in FPGA programming, this should give you goosebumps!

Second prize

tinker

The most powerful SBC I have ever used

The silver medal is the powerful Asus Tinkerboard, probably the most powerful SBC you can get below 100€. It delivers 10 times the firepower a Raspberry Pi 3B can muster – and is superbly suited for Android development, Smart Mobile Studio kiosk systems and much, much more.

Of all the boards I have tested and own, this is the one with enough CPU grunt (even the mighty ODroid XU4 can’t touch it) to rival a low-end x86 laptop. You have to fork out for a Snapdragon IV to beat the Tinkerboard.

I have two of these around the house myself, one as a game console running Emulation Station (emulates PSX 1, 2 and 3 games), and another under my TV with Kodi and a 2 terabyte movie collection.

Third prize

Last but not least, the bronze medal is a Raspberry Pi 3B. The Pi should be no stranger to programmers today; it more or less defines the IoT revolution and has, by far, the biggest software collection of all the SBCs (single board computers) available today.

Raspberry_Pi_3_Large

The device that represents the IOT phenomenon

The Pi is a wonderful starter board for Delphi developers who want to play with hardware under Android. It’s also a fantastic board for Smart and FPC development.

I use a Pi to test node.js services written in Smart Mobile Studio.

Dates

We start the clock on the 1st of October, and submissions must be delivered by the 31st. So you have a full month to code something cool!

Remember comments

While not always possible, try to write clean code. Part of the point here is to use these demos as an educational resource.

We won’t reject non-commented code, but please try to avoid 20k lines of spaghetti.

Hints and tips

Delphi has brilliant support for DirectX and OpenGL, so taking advantage of hardware acceleration should not be a problem. FMX is largely powered by the GPU and has 3D rendering and modeling as an integral feature – so Delphi developers have a slight advantage there.

16_bit_smb2_smm_wip_by_trackmasterfan341-da3nch3

Tilesets are graphics-blocks that can be used to create large game levels with a map-editor

If you want to use DIBs under vanilla WinAPI, there is always Graphics32, a wonderful and exceptionally detailed library for fast graphics.

Music: most demo-scene code uses mod music (though today people play MP3s as well), and there are good wrappers for player libraries like Bass. It’s always a nice touch to add a spot of music, and there are literally millions of mod tracks freely available. So give your demo some flair by adding a kick-ass mod track, or impress us by writing a score yourself!

In the world of demo coding anything goes! Bring out that teenage spirit and go wild: create wonderful graphical effects, vector objects, scrolling texts, games or whatever tickles your fancy. If you need inspiration, check out the demo-scene videos on YouTube (if that is what you would like to submit, of course). A kick-ass database application, an X server renderer, a paint program or a compiler – it’s all good!

Make people go WOW that is cool!

Tile graphics, which are often used in games and demos, can be found almost anywhere. If you google “tileset” or “game tiles” you should get more than you need. They are brilliant for parallax scrolling. Why not give Super Mario a run for its money and show the next generation how to code a platform game? Check out the Tiled map editor, which has a JSON export filter for you Smart Pascal coders.

screenshot-objects

Tiled is a powerful map editor. There is also Mappy, which I believe has a Delphi player

OK guys, the game is afoot! May the best coder win!

Smart Mobile Studio presentation in Oslo

September 28, 2018 Leave a comment

Yesterday evening I traveled to Oslo and held a presentation on Smart Mobile Studio. The response was very positive and I hope that everyone who attended left with some new ideas regarding JavaScript, the direction the world of software is heading – and how Smart Mobile Studio can be of service to Delphi.

Smart Pascal is especially exciting in concert with RAD Server, where it opens the door to Node-based, platform-independent services and sub-clustering. With relatively little effort RAD Server can absorb the wealth that node has to offer through Smart – but on your terms, and under Delphi’s control. The best of both worlds.

You get the stability and structure that makes Delphi so productive, and then infuse that with the flamboyance, flair and async brilliance that JavaScript represents.

More important than technology is the community! It’s been a few years since I took part in the Oslo Delphi Club’s meetups, so it was great to chat with Halvard Vassbotten, Trond Grøntoft, Alf Christoffersen, Torgeir Amundsen and Robin Bakker face to face again. I also had the pleasure of meeting some new Delphi developers.

prespic

Presentation at ABG Sundal Collier’s offices in Oslo

Thankfully the number of attendees was a moderate 14, considering this was my first presentation ever. Last time I visited was when our late Paweł Głowacki presented FMX, and the turnout was in the ballpark of a hundred. So it was an easy-going, laid-back atmosphere throughout the evening.

Conflict of interest?

Some might wonder why a person working for Embarcadero would present Smart Mobile Studio, which some still regard as competition. Smart is not in competition with Delphi and never will be. It is written by Delphi developers for Delphi developers, as a means to bridge two worlds. It’s a project of loyalty and passion. We continue because we love what it enables us to do.

The talks on Smart that I am holding now, including the November talk in London, were booked before I started at Embarcadero (so it’s not a case of me promoting Smart in lieu of Embarcadero). I also made it perfectly clear when I accepted the job that my work on Smart would continue in my spare time, and Embarcadero is fine with that. So I am free to spend my after-work hours and weekend time as I see fit.

smart_desktop

The Smart Desktop, codename Amibian.js, is a solid foundation for building large-scale web front-ends. Importing Sencha’s JS APIs can be done via our TypeScript wizard

So, after my presentation in London in November, Smart Mobile Studio presentations (at least those hosted by me) can only take place during weekends. Which is fair, and the way it should be.

Recording the English version

Since the presentation last evening was in Norwegian, there was little point in recording it. Norway has a healthy share of Delphi developers, but a programming language available internationally must be presented in English.

tech

A couple of months back, before I started working for Embarcadero, I promised to do a video presentation that would be available on Delphi Developer and YouTube. I very much want to keep that promise, so I will re-do the presentation in English as soon as possible. I would have done it today after work, but buying tech from the US has changed quite dramatically in just a couple of years.

In short: I haven’t received the remaining equipment I ordered for professional video recording and audio podcasting (which is part of my Patreon offering as well). As such there will be no live video-feed /slash/ webinar – and questions will be limited to either the comment section on Delphi Developer or, perhaps more appropriately, the Smart Mobile Studio Forums.

I’m hoping to get the HD camera, mic table-arm and various bits-and-bobs I ordered from the US sometime next week. I have no idea why FedEx has become so difficult lately, but the package is apparently at LaGuardia, and I have to send receipts documenting that these items are paid for before they will ship them abroad (so the package manifest listing me as the customer, my address, phone number and the receipt from the seller is somehow not enough). This is a first for me.

Interestingly they also stopped a package from Embarcadero with giveaways for my upcoming Delphi presentation in Sweden – at which point I had to send them a copy of my work contract to prove that I indeed work for an American company.

But a promise is a promise, so come rain or shine it will be done. Worst case scenario we can put Samsung’s claims to the test and hook up a mic + photo lens and see if their commercials have any merit.

Help&Doc, documentation made easy

September 13, 2018 Leave a comment

I have been flamed so much lately for not writing proper docs for Smart Mobile Studio that I figured it was time to get this under control. Now, in my defence, I’m not the only one on the Smart Pascal team; sure, I make the most noise, but Smart is definitely not a solo operation.

So the irony of getting flamed for lack of docs, having perpetually lobbied for docs at every meeting since 2014 – well, that, my friend, is mother nature at her finest. If you stick your neck out, she will make it her personal mission to mess you up.

So off I went in search of a good documentation system ..

The mission

My dilemma is simple: I need to find a tool that makes writing documentation simple. It has to be reliable, deal with cross-chapter links, handle segments of code without ruining the formatting of the entire page – and printing must be rock solid.

dims

Writing documentation in Open Office feels very much like this

If you are pondering why I even mention printing in this digital age, it’s because I prefer physical media. Writing a solid book, be it a mix between technical reference and user’s guide, can’t compare to a blog post. You need to let the material breathe for a couple of days between sessions to spot mistakes. I usually print things out, let them rest, then go over them with an old-fashioned marker.

Besides, my previous documentation suite couldn’t do PDF printing. I’m sure it could, just not around me. Whenever I picked the Microsoft PDF printer as the output, it promptly committed suicide. Not even an exception; nothing, just “poff” and it terminated. The first time this happened I lost half a day’s work. The third time, I uninstalled it, never to look back.

Another thing I would like is a program that deals with graphics more efficiently than Google Docs, and at the very least more intuitively than Open Office (Oo for short). Now, before you argue with me over Oo, let me just say that I’m all for Open Office; it has a lot of great features. But in their heroic pursuit of cloning Microsoft to death, they also cloned possibly the worst layout mechanism ever invented, namely the layout engine of Microsoft Word 2001.

Let’s just say that scaling and perspective are not the best in Open Office. Like Microsoft Word back in the day, it faithfully favours page-breaks over perspective-based scaling. It will even flip the orientation if you don’t explicitly tell it not to.

Help & Doc

As far as I know, there are only two documentation suites on the market related to Delphi and coding; at least when it comes to producing technical manuals and help files while also being written in Delphi.

First you have the older and perhaps more established Help & Manual. This is followed by the younger but equally capable Help & Doc. I ended up with the latter.

main_window

Help & Doc’s main window, clean and pleasing to the eye

Both suites have more in common than similar names (which is really confusing); they offer pretty much the exact same functionality. Except Help & Doc is considerably cheaper and has a couple of features that developers favour. At least I do, and I imagine the needs of other developers will be similar.

Being older, Help & Manual has built up more infrastructure, something which can be helpful in larger organizations. But their content-management strategy is (at least to me) something of a paradox. You need more than .NET documentation and shared editing to justify the higher price – and having to install a CMS just to enjoy shared editing? It might make sense if you are a publisher, a ghostwriter, or if you have a large department with 5+ people doing nothing but documentation; but competing against Google Documents in 2018? Sorry, I don’t see that going anywhere.

For me, Help & Doc makes more sense because it remains true to its basic role: to help you create documentation for your products. And it does that very, very well.

server_window

Help & Doc has a built-in server for testing web documentation with a minimum of fuss

I also like that Help & Doc is crystal clear about its origins. Help & Manual has anonymized its marketing to tap into .NET and Java; they are not alone, quite a few companies try to hide the fact that their flagship product is written in Object Pascal. So you get a very different vibe from these two websites and their products.

The basics

Much like the competition, Help & Doc offers a complete WYSIWYG text editor with support for computed fields. So you can insert fields that hold variable data, like time, date (and various pieces of a full datetime), project title, author name [and so on]. I hope to see script support at some point here, so that a script could provide data during PDF/Web generation.

The editor is responsive and well written; it supports tables, margins and formatting like you expect from a modern editor. I’m not really sure how much more I need to write about a text editor – most Delphi and C++ developers have high standards, and I suspect they have used RichView, a well-known, high-quality component.

One thing I found very pleasing is that fonts are not hidden away but easily accessible; the various text styles hold a prominent place under the Write tab on top of the window. This is very helpful because you don’t have to apply a style to see how it will look; you can get an idea directly from the preview.

styles_window

Very nice, clear and one click away

Inserting conditional sections is something I found very easy. It’s no doubt part of other offerings too, but I have never really bothered to involve myself. With so many potential targets – mobile phones, iPads, desktops, Kindle – this kind of functionality suddenly becomes a thing.

insert_condition

Adding conditional sections is easy

For example, if you have documentation for a component that targets Delphi, .NET and COM (yes, people still use COM, believe it or not), you don’t need 3 different copies of the same documentation with only small variations between them. Using the conditional operations you can isolate the differences.

With Apple OSX, iOS and Android added as compiler targets (for Delphi), there is a real need to separate Apple-only instructions on how to use a library [for example] and include them only in the Apple output. Windows and Linux can have their own, unique sections – and you don’t need to maintain 3 nearly identical documentation projects.

When you combine that with script support, Help & Doc is flexing some powerful muscles. I’m very much impressed and don’t regret choosing this over the more expensive Help & Manual. Perhaps it would be different if I was writing casual books for a publisher, or if I made .NET components (oh the humanity!) and desperately needed to please Visual Studio. But for a hard-core Delphi and Object Pascal developer, Help & Doc has everything I need – and then some!

Wait, what? Script support?

Scripting docs

One of the really cool things about Help & Doc is that it supports Pascal scripting. You can do some pretty amazing things with a script, and being able to iterate through the documentation in classical parent/child relationships is very helpful.

script_window

The central role of Object Pascal is not exactly hidden in Help & Doc

If you are wondering why a script engine would even be remotely interesting for an author, consider the following: you maintain 10-12 large documentation projects, and at each revision there will be plenty of small and large changes – things like classes getting new names. If you have mentioned a class 300 times in each manual, changing a single name by hand is going to be time-consuming.

This is where scripting is really cool, because you can write code that iterates through the documentation – chapter by chapter, section by section, paragraph by paragraph – and automatically replaces every occurrence in a second.
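
To make that concrete, here is a rough sketch of what such a rename script could look like. Every identifier below (Project, Chapters, Topics, the Text property) is hypothetical; I have not mapped the actual Help & Doc scripting API, so treat this as an illustration of the parent/child traversal rather than working code:

// Hypothetical API sketch: walk every chapter, then every topic,
// and replace the old class-name wherever it occurs
procedure RenameClassEverywhere(const OldName, NewName: string);
var
  x, y: integer;
begin
  for x := 0 to Project.ChapterCount - 1 do
    for y := 0 to Project.Chapters[x].TopicCount - 1 do
      Project.Chapters[x].Topics[y].Text :=
        StringReplace(Project.Chapters[x].Topics[y].Text,
          OldName, NewName, [rfReplaceAll]);
end;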

snap01

Metablaster was a desktop search engine I made in 1999. I used scripts to target each search engine

I haven’t spent a huge amount of time with the scripting API Help & Doc offers yet (I have been too busy writing), but I imagine that a plugin framework is a natural step in its evolution. I made a desktop search engine once, back between 1999 and 2005 (just after the bronze age), where we bolted Pascal Script into the system and implemented each search-engine parser as a script. This was very flexible, and we could adapt to changes faster than our competitors.

While I can only speculate (and hope the makers of Help & Doc read this), the natural next step would be an API that exposes a fair subset of Delphi (streams, files, string parsing et al) to scripts, and then defines base classes for import scripts, export scripts and document-processing scripts. That way developers could write their own import code to support a custom format (medical documentation springs to mind as an example), and likewise write their own export code.

This is a part of the software I will explore more in the weeks to come!

Verdict – is it worth it?

As of writing you get Help & Doc Professional at 249€, and you can pick up the Standard edition for 99€. Not exactly an earth-shattering price for the mountain of work involved in creating such an elaborate system. If you factor in how much time it saves you: yes, why on earth would you even think twice!

new_window

Using Help & Doc is very easy, here we are creating a new doc with a few chapters

I have yet to find a function in the competition that would change my mind. As a developer who is part of a small team, or even as a solo developer, documentation has to be there. I can list 10.000 reasons why Smart never got the documentation it deserves, but at least now I can scratch one of them off my list. Writing 500 A4 pages in markdown would have me throwing myself into the fjords at -35 degrees Celsius.

And being the rogue that I am, should I find intolerable bugs you will be sure to hear about them — but I have nothing to complain about here.

It’s one of the most pleasant pieces of software I have used in a long time.

Human beings and licenses

Before I end this article, I also want to mention that Help & Doc has a licensing system that surprised me. If you buy 2 licenses, for example, you get to link each license with a computer, so you have very good control over your ownership. Should you run out of licenses, you either have to relocate an existing license or get a new one. You are not locked out, and they don’t frag you with compliance threats.

licenses

Doesn’t get much easier than this

I use VMWare a lot and sometimes forget that I’m running a clone on top of a clone, and believe me, I have gotten some impressive emails in the past. I think the worst was Xamarin Mono, which actually deactivated my entire environment until I called them and explained I was doing live debugging between two VMWare instances.

So very cool to see that you can re-allocate an existing license to whatever device you want without problems.

To sum up: worth every penny!

HexLicense, Patreon and all that

September 6, 2018 Comments off

Apparently, using a modern service like Patreon to maintain components has become a point of annoyance and confusion. I realize that I formulated the initial HexLicense post in a somewhat vague and confusing way; in retrospect I will admit that, and I’ll also take the critique for not spending a little more time on preparations.

Having said that, I also corrected the mistake quickly and clarified the situation. I feel some of the comments have been excessively critical of something that, ultimately, is a service to the community. But I’ll roll with the punches; let’s just put this issue to bed.

From the top please

fromthetop

I have several products and frameworks that naturally take time to maintain and evolve. Having to maintain websites, pay for tax and invoicing services, pay for hosting (and so on) consumes a lot of hours – hours that I can no longer afford to spend (my work at Embarcadero must come first; I have a family to support). So Patreon is a great way to optimize a very busy schedule.

Today developers solve a lot of the business strain by using Patreon. They make their products open source, but give those that support and help fund the development special perks, such as early access, special builds and a more direct line of control over where the different projects and sub-projects are heading.

The public repository that everyone has access to is maintained by pushing the code at intervals, meaning that the public “free stuff” (LGPL v3 license) will be some months behind the early-access code that patrons enjoy. This is common, and it’s the same way both large and small teams go about things in 2018. Quite radical compared to what we “old-timers” are used to, but that’s how things work now. I just go with the flow and try to do the most amount of good on the journey.

Benefits of Patreon

The benefits are many, but first and foremost it has to do with time. Developers don’t have to maintain 3-4 websites, pay for invoicing services on said products, pay hosting fees and rent support forums; instead the focus is on getting things done. So instead of an hour here and there, you can (based on the level of support) allocate X continuous hours within a week or weekend.

4a128ea6852444fbfc89022be4132e9b

Patreon solves two things: time and cost

Everyone wins. Those that support and help fund the projects enjoy early access and special builds. The community at large wins because the public repository is likewise maintained, albeit somewhat behind the cutting-edge code patrons enjoy. And the developer wins because he or she doesn’t have to run around like a mad chicken maintaining X number of websites, wasting more time doing maintenance than building cool new features.

And above all, pricing goes down. By spreading the cost over a larger base of interest, people get access to code that used to cost $200 for $35. The more people that help out, the more the cost per tier can be reduced.

To make it crystal clear what the status of my frameworks and component packages is, here is a carbon copy from HexLicense.com:

For immediate release

Effective immediately HexLicense is open-source, released under the GNU Lesser General Public License v3. You can read the details of that license by clicking here.

Patreon model

Patreon_logo.svg

In order to consolidate the various projects I maintain, I have established a Patreon account. This means that people can help fund further development on HexLicense, LDEF, Amibian and various Delphi libraries as a whole. This greatly simplifies things for everyone.

I will be able to allocate time based on a broader picture, and I no longer need to pay for invoicing services, web hosting and more. This allows me to continue to evolve the components and code, but without so many separate product identities to maintain.

Patreon supporters will receive updates before anyone else and have direct access to the latest code at all times. The public Bitbucket repository will be updated at intervals, but will consequently be behind the Patreon updates.

Further security

One of the core goals on Patreon is the evolution of a bytecode compiler. This should be of special interest to HexLicense users: being able to compile modules that hackers are unable to debug gives you a huge advantage. The engine is designed so that the instruction set can be randomized for a particular build, making it unique to your application.

patron_asm1

The LDEF assembler prototype running under Smart Mobile Studio

Well, I want to thank everyone involved. It has been a great journey to produce so many components, libraries and solutions over the years – but now it’s time for me to cut down on the number of projects and focus on core technology.

HexLicense with the updated license files will be uploaded to BitBucket shortly.

Sincerely,

Jon Lennart Aasenden

Getting organized: register a Delphi user group or club!

August 28, 2018 Leave a comment

It’s been a hectic week at Delphi Developer, but a highly productive one! I am very happy that so many developers have responded and helped with the organizational work, because Delphi and C++ builder developers must get organized. If you want to see lasting, positive results, this has to happen. There are vast quantities of individuals, groups and companies that use Delphi and C++ builder around the world, yet we all sit in our own bubbles, thinking we are alone. It’s time to change that.

“we have decades of experience and technical expertise. And that is worth protecting”

In 2016 I was contacted by a Norwegian HR company (read: head hunters) and offered a Delphi position at a local business. It turned out the business had struggled to find Delphi programmers for over six months. When I told them about the Oslo Delphi Club and showed them the 7500 members we have in Delphi Developer on Facebook, they were gobsmacked. The human resource company was equally oblivious to the sheer number of developers just in Norway, let alone internationally.

Part of what I do today as an Embarcadero SC is to provide human-resource companies with clear information on where they can look for competent Delphi developers. But in order to deliver that effectively, we first have to establish a map.

Put your local club or interest group on the map!

Last Friday (24.08.2018) I published an open document on Delphi Developer. This document is open and available to everyone, with the sole purpose of making it easier for developers to find clubs and interest groups in their region (jobs are often found through acquaintances, so connecting with a local group is important). It will also simplify how we as a community can approach human resource companies. Our document is growing, but we still need more! So please take five minutes to add your local user group.

Ebusiness Concept

The Delphi and C++ builder community is large, but we need representation with HR

Delphi and C++ builder are seeing stable and healthy growth. It has taken a lot of hard work and effort to get where we are today, both by Embarcadero and by the developers that use RAD Studio as their business backbone.

My hope is that everyone who reads this can allocate a few minutes – just five minutes – to add to our document. So if you know of a Delphi or C++ builder user group, perhaps a club or organization, then please check the document (note: the document is pinned as an announcement on top of the Facebook group feed, but members can also reach it directly by clicking here) and add the club if it’s not already there.

Note: Please make sure that the information is correct. Call the club or group if possible. Remember, this document is for everyone. We want to maintain the document and keep it available 24/7.

Building bridges

The work members are doing for the community is quite important; it determines where we can go next. In fact, I will contact each and every club to establish communication and co-operation. There is much to debate, such as capacity for tutoring, courseware, a primary contact for new users, and more. If need be I will personally travel so we can meet face to face. I am deadly serious about this, because there is no other way to build critical mass. Our group alone has thousands of members who have invested a lot of money in software, components, formal training and education; we have decades of experience and technical expertise. And that is worth protecting.

Getting organized to safeguard our education, our language of preference, our jobs and ultimately to nurture our future is a worthy cause. I hope I have everyone’s blessing in this — but I can’t do everything alone. It is impossible for me to know if there are 3 Delphi clubs in Venezuela, 4 in Canada and 15 in India. We need to get them pinned on a map and formulate a strategy for lasting, positive results.

turn-the-page-look-to-the-future-660x330

The past is experience, the future is opportunities

I want to thank each and every one of you that has added to the document. Thank you so much; this will help our community more than you think. It might seem like a small step, but that first step is the most important of them all. All great things start as an idea, but when you apply force and determination, it becomes reality.

I am extremely lucky because this work is now a part of my job. My work includes a bit of everything: studies, authoring, coding, consulting and presentations. But the part I love the most is to connect people.

Real life results

If you think the document in question is a waste of time, think again!

4a128ea6852444fbfc89022be4132e9b

Last week we had 3 rather frustrated members who desperately needed a job. After calming the situation down I made some calls and was able to find remote work for all of them.

It is a wonderful feeling when you can help someone; it is also what community is all about. The more organized we get, the better it will be for everyone. LinkedIn is great, but networking without an infrastructure that responds bears no fruit. And that is where Delphi Developer comes in. We are very much alive and kicking.

So with less than a week of organization behind us, we found and delivered jobs as a direct consequence of the Delphi Developer Facebook Group.

Building a Delphi Database engine, part two

August 16, 2018 Leave a comment

In the first episode of this tutorial we looked at some of the fundamental ideas behind database storage. We solved the problem of storing arbitrary length data by dividing the database file into conceptual parts; we discovered how we could daisy-chain these parts together to form a sequence; and we looked at how this elegantly solves reclaiming lost space and recycling that for new records. Last but not least we had a peek at the bit-buffer that helps us keep track of what blocks are taken, so we can easily grow and shrink the database on demand.

In this article we will be getting our hands dirty and putting our theories into practice. The goal today is to examine the class that deals with these blocks, or parts. While we could perhaps get better performance by putting everything into a monolithic class, the topics are best kept separate while learning the ropes. So let’s favour OOP and class encapsulation for this one.

The DbLib framework

Prior to writing this tutorial I had to produce the code you would need; it would be a terrible mistake to run a tutorial with just theories to show for it. Thankfully I have been coding for a number of years now, so I had most of the code in my archives. To make life easier for you I have unified that code into a little framework.

This doesn’t mean that you have to use the framework. The units I provide are there to give you something tangible to play with. I have deliberately left ample room for optimization, and for things that can be done differently.

I have set up a Bitbucket git repository for this tutorial, so your first order of business is to download or fork the repository:

https://bitbucket.org/cipher_diaz/dbproject/src/master/

The database file class

The first installment of this tutorial ended with a few words on the file header. This is the only static or fixed data segment in our file. And it must remain fixed because we need a safe space where we can store offsets to our most important sequences, like the root sequence.

The root sequence is, simply put, the data that describes the database, also known as metadata. All those items I listed at the start of the previous article – things like table definitions, index definitions, the actual binary table data (et al) – we have to keep track of these somewhere, right?

Well, that’s where the file header comes in. The header is the keeper of all secrets and is imperative for the whole engine.

The record list

Up until this point we have covered blocks, sequences, the file header, the bit-buffer that keeps track of available and reserved blocks — but what about the actual records?

db_file_sequence

When someone performs a high-level insert operation, the binary data that makes up the record is written as a sequence; that should be crystal clear by now. But having a ton of sequences stored in a file is pretty useless without a directory or list that remembers them. If we have 10.000 records (sequences) in a file, then we must also keep track of 10.000 offsets, right? Otherwise, how can we know where record number 10, 1500 or 9000 starts?

Conceptually, metadata is not actual data; metadata is a description of data, like a table or index definition. The list that holds all the record offsets is real data, so I don’t want to store it together with the metadata but keep it separate. The bit-buffer that keeps track of block availability in the file is likewise “real” data, so I would like to keep that in a separate sequence too.

When we sit down and define our file-header record, which is a structure that is always at the beginning of the file (or stream), we end up with something like this (a Delphi sketch of the record follows the list):

  • Unique file signature: longword
  • Version minor, major, revision: longword (byte, byte, word)
  • Database name: 256 bytes [holds utf8 encoded text]
  • Encryption cipher: integer
  • Compression id: integer
  • root-sequence: longword
  • record-list-sequence: longword
  • bit-buffer-sequence: longword
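
Expressed as a packed Delphi record, the header could look something like the sketch below. The field names are my own mapping of the list above (the simplified TDbHeader used by the test program later in this article has fewer fields):

type
  TDbLibVersion = packed record
    Major:    byte;
    Minor:    byte;
    Revision: word;
  end;

  TDbLibFileHeader = packed record
    Signature:   longword;                      // unique file signature
    Version:     TDbLibVersion;                 // engine version info
    Name:        packed array[0..255] of byte;  // utf8 encoded database name
    Cipher:      integer;                       // encryption cipher id
    Compression: integer;                       // compression id
    Root:        longword;                      // part# of the root sequence
    RecordList:  longword;                      // part# of the record-list sequence
    BitBuffer:   longword;                      // part# of the bit-buffer sequence
  end;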

If you are wondering about the encryption and compression fields, don’t overthink it. It’s just a place to store something that identifies whatever encryption or compression we have used. If time allows we will have a look at zlib and RC4, but even if we don’t, it’s good to define these fields for future expansion.

The version longword is actually more important than you might think. If the design of your database and header changes dramatically between versions, you want to check the version number to make absolutely sure you can even handle the file. I have placed this as the second field in the record, 4 bytes into the header, so that it can be read early. The moment you have more than one version of your engine, you might want to write a routine that reads just the first 8 bytes of the file and checks compatibility.
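
A minimal sketch of such an early check could look like this, assuming the header layout above ($CAFEBABE is the signature used by the test program later in this article; CNT_VERSION_MAJOR is a hypothetical constant for the highest version this build understands):

// Read just the signature and version (the first 8 bytes)
// and bail out before parsing anything else
function CheckFileCompatible(const Stream: TStream): boolean;
var
  LSignature: longword;
  LVersion: TDbLibVersion;
begin
  Stream.Position := 0;
  Stream.ReadBuffer(LSignature, SizeOf(LSignature)); // 4 bytes
  Stream.ReadBuffer(LVersion, SizeOf(LVersion));     // 4 bytes
  Result := (LSignature = $CAFEBABE)
        and (LVersion.Major <= CNT_VERSION_MAJOR);
end;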

What are those buffers?

node

The buffer classes are Delphi implementations of Node.JS buffers, including insert and remove functionality

Having forked the framework, you suddenly have quite a few interesting units. But you can also feel a bit lost if you don’t know what the buffer classes do, so I want to start with those first.

The buffer classes are alternatives to streams. Streams are excellent, but they can be quite slow if you are doing intense read-write operations. More importantly, streams lack two fundamental features for DB work, namely insert and remove. For example, let’s say you have a 100 megabyte file and you want to remove 1 megabyte from the middle of it. It’s not a complex operation, but you still need to copy the trailing data backwards as quickly as possible before scaling the stream size. The same is true if you want to inject data into a large file. It’s not a huge operation, but it has to be 100% accurate and move data as fast as possible.
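
To illustrate why this matters, here is a naive sketch of what removing a chunk from the middle of a plain TStream involves; the buffer classes do the same job with more optimized move operations, but the principle is the same:

// Naive sketch (uses System.Classes for TStream): remove Count bytes
// at Offset by copying the trailing data backwards over the hole,
// then shrinking the stream
procedure RemoveFromStream(const Stream: TStream; Offset, Count: Int64);
var
  LCache: array[0..8191] of byte;
  LReadPos, LWritePos: Int64;
  LRead: integer;
begin
  LReadPos := Offset + Count;  // first byte after the hole
  LWritePos := Offset;         // where the trailing data should land
  while LReadPos < Stream.Size do
  begin
    Stream.Position := LReadPos;
    LRead := Stream.Read(LCache, SizeOf(LCache));
    if LRead < 1 then
      break;
    Stream.Position := LWritePos;
    Stream.Write(LCache, LRead);
    inc(LReadPos, LRead);
    inc(LWritePos, LRead);
  end;
  Stream.Size := Stream.Size - Count;
end;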

I could have just inherited from TStream, but I wanted to write classes that were faster, had more options, and were easier to expand in the future. The result of those experiments was the TBuffer classes.

So mentally, just look at TDbBuffer, TDbBufferMemory and TDbBufferFile as streams on steroids. If you need to move data between a stream and a buffer, just create a TDbLibStreamAdapter instance and you can access the buffer like a normal TStream descendant.
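
As a quick usage sketch (I am assuming here that the adapter constructor simply takes the buffer; check the dblib units for the actual signature), saving a TBitmap into a memory buffer could look like this:

// Hypothetical usage sketch: the constructor signature is assumed
procedure SaveBitmapToBuffer(const Bitmap: TBitmap;
  const Buffer: TDbLibBufferMemory);
var
  LAdapter: TDbLibStreamAdapter;
begin
  // Wrap the buffer so it behaves like an ordinary TStream
  LAdapter := TDbLibStreamAdapter.Create(Buffer);
  try
    // Anything that expects a TStream can now write into the buffer
    Bitmap.SaveToStream(LAdapter);
  finally
    LAdapter.Free;
  end;
end;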

Making a file of blocks

With enough theory behind us, let’s dig into the codebase and look at the class that deals with a file as blocks, or parts. Open up the unit dblib.partaccess.pas and you will find the following class:

  TDbLibPartAccess = class(TObject)
  private
    FBuffer:    THexBuffer;
    FheadSize:  integer;
    FPartSize:  integer;
  protected
    function GetPartCount: integer; inline;
  public
    property Buffer: THexBuffer read FBuffer;
    property ReservedHeaderSize: integer read FheadSize;
    property PartSize: integer read FPartSize;
    property PartCount: integer read GetPartCount;
    procedure ReadPart(const PartIndex: Integer; var aData); overload;
    procedure ReadPart(const PartIndex: Integer; const Data: THexBuffer); overload;
    procedure WritePart(const PartIndex: Integer; const Data; const DataLength: Integer); overload;
    procedure WritePart(Const PartIndex: Integer; const Data: THexBuffer); overload;

    procedure AppendPart(const Data; DataLength: Integer); overload;
    procedure AppendPart(const Data: THexBuffer); overload;

    function CalcPartsForData(const DataSize: Int64): integer; inline;
    function CalcOffsetForPart(const PartIndex: Integer): Int64; inline;

    constructor Create(const DataBuffer: THexBuffer;
      const ReservedHeaderSize: Integer; const PartSize: Integer); virtual;
  End;

As you can see, this class is pretty straightforward. You pass a buffer (either memory or file) via the constructor, together with the size of the file header. This helps the class avoid writing to the first section of the file by mistake: whenever the method CalcOffsetForPart() is called, it adds the size of the header to the result, shielding the header from being over-written.

The other methods should be self-explanatory; you have various overloads for writing a sequence part (block), appending parts to the database file, and reading them back. All these methods are offset based, meaning you give them a part number and they calculate where that part is physically located inside the file.

One important method is the CalcPartsForData() function. This is used before splitting a piece of data into a sequence. Let’s say you have 1 megabyte of data you want to store inside the database file; you first call this function, and it calculates how many blocks you need.
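
Under the hood both calculation methods are simple arithmetic. Here is a sketch of how they can be implemented to match the behavior described above (the repository version may differ in details):

// Ceiling division: how many whole parts do we need for DataSize bytes?
function TDbLibPartAccess.CalcPartsForData(const DataSize: Int64): integer;
begin
  result := DataSize div FPartSize;
  if (DataSize mod FPartSize) > 0 then
    inc(result);
end;

// Physical offset of a part, always skipping the reserved header
function TDbLibPartAccess.CalcOffsetForPart(const PartIndex: Integer): Int64;
begin
  result := FheadSize + (Int64(PartIndex) * FPartSize);
end;

Note the Int64 cast in the multiplication; with large files, a plain integer multiplication could overflow before the result is widened.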

Once you know how many blocks you need, the next step is to ask the bit-buffer (that we introduced last time) whether the file has that many free blocks. If the file is full, you either have to grow the file to fit the new data, or issue an error message.

See? It’s not that complex once you have something to build on!

Proof-reading test: making sure what we write is what we read

With the scaffolding in place, let’s write a small test to make absolutely sure that the buffer and class logistics check out OK. We are just going to do this on a normal form (this is the main project in the Bitbucket project folder), so you don’t have to type this code. Just fork the code from the URL mentioned at the top of this article and run it.

Our test is simple:

  • Define our header and part records, doesn’t have to be accurate at this point
  • Create a database file buffer (in memory) with size for header + 100 parts
  • Create a TDblibPartAccess class, feed in the sizes as mentioned above
  • Create a write buffer the same size as part/block record
  • Fill that buffer with some content we can easily check
  • Write the writebuffer content to all the parts in the file
  • Create a read buffer
  • Read back each part and compare content with the write buffer

If any data is written the wrong way or overlapping, what we read back will not match our write buffer. This is a very simple test to just make sure that we have IO fidelity.

OK, let’s write some code!

unit mainform;

interface

uses
  Winapi.Windows, Winapi.Messages, System.SysUtils,
  System.Variants, System.Classes, Vcl.Graphics,
  Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls,
  dblib.common,
  dblib.buffer,
  dblib.partaccess,
  dblib.buffer.memory,
  dblib.buffer.disk,
  dblib.encoder,
  dblib.bitbuffer;

const
  CNT_PAGESIZE = (1024 * 10);

type

  TDbVersion = packed record
    bvMajor:  byte;
    bvMinor:  byte;
    bvRevision: word;
  end;

  TDbHeader = packed record
    dhSignature:  longword;     // Signature: $CAFEBABE
    dhVersion:    TDbVersion;   // Engine version info
    dhName:       shortstring;  // Name of database
    dhMetadata:   longword;     // Part# for metadata
  end;

  TDbPartData = packed record
    ddSignature:  longword;
    ddRoot:       longword;
    ddNext:       longword;
    ddBytes:      integer;
    ddData: packed array [0..CNT_PAGESIZE-1] of byte;
  end;

  TfrmMain = class(TForm)
    btnWriteReadTest: TButton;
    memoOut: TMemo;
    procedure btnWriteReadTestClick(Sender: TObject);
  private
    { Private declarations }
    FDbFile:    TDbLibBufferMemory;
    FDbAccess: TDbLibPartAccess;
  public
    { Public declarations }
    constructor Create(AOwner: TComponent); override;
    destructor  Destroy; override;
  end;

var
  frmMain: TfrmMain;

implementation

{$R *.dfm}

{ TfrmMain }

constructor TfrmMain.Create(AOwner: TComponent);
begin
  inherited;
  // Create our database file, in memory
  FDbFile := TDbLibBufferMemory.Create(nil);

  // Reserve size for our header and 100 free blocks
  FDBFile.Size := SizeOf(TDbHeader) + ( SizeOf(TDbPartData) * 100 );

  // Create our file-part access class, which access the file
  // as a "block" file. We pass in the size of the header + part
  FDbAccess := TDbLibPartAccess.Create(FDbFile, SizeOf(TDbHeader), SizeOf(TDbPartData));
end;

destructor TfrmMain.Destroy;
begin
  FDbAccess.Free;
  FDbFile.Free;
  inherited;
end;

procedure TfrmMain.btnWriteReadTestClick(Sender: TObject);
var
  LWriteBuffer:  TDbLibBufferMemory;
  LReadBuffer: TDbLibBufferMemory;
  LMask: ansistring;
  x:  integer;
begin
  memoOut.Lines.Clear();

  LMask := 'YES!';

  // create a temporary buffer
  LWriteBuffer := TDbLibBufferMemory.Create(nil);
  try
    // make it the same size as our file-part
    LWriteBuffer.Size := SizeOf(TDbPartData);

    // fill the buffer with our test-pattern
    LWriteBuffer.Fill(0, SizeOf(TDbPartData), LMask[1], length(LMask));

    // Fill the dbfile by writing each part, using our
    // temporary buffer. This fills the file with our
    // little mask above
    for x := 0 to FDbAccess.PartCount-1 do
    begin
      FDbAccess.WritePart(x, LWriteBuffer);
    end;

    LReadBuffer := TDbLibBufferMemory.Create(nil);
    try
      for x := 0 to FDBAccess.PartCount-1 do
      begin
        FDbAccess.ReadPart(x, LReadBuffer);

        // note: the <> operator was eaten by the blog formatting;
        // we flag a failure whenever read-back differs from what we wrote
        if LReadBuffer.ToString <> LWriteBuffer.ToString then
          memoOut.Lines.Add('Proof read part #' + x.ToString() + ' = failed')
        else
          memoOut.Lines.Add('Proof read part #' + x.ToString() + ' = success');
      end;
    finally
      LReadBuffer.Free;
    end;

  finally
    LWriteBuffer.Free;
  end;
end;

end.

The form has a button and a memo on it; when you click the button we get the following result:

writeread

Voila, we have IO fidelity!

Finally things are starting to become more interesting! We still have a way to go before we can start pumping records into this thing, but at least we have tangible code to play with.

In our next installment we will implement the sequence class, which takes the TDbLibPartAccess class and augments it with functionality to read and write sequences. We will also include the bit-buffer from our first article, and watch as the silhouette of our database engine comes into view.

Again, this is not built for speed but for education.

Until next time.