1 Introduction

The Ada Utility Library provides a collection of utility packages which includes:

This document describes how to build the library and how to use its different features to simplify the development of your Ada application.

2 Installation

This chapter explains how to build and install the library.

2.1 Before Building

Before building the library, you will need:

First get, build and install XML/Ada, then get, build and install the Ada Utility Library.

2.2 Configuration

The library uses the configure script to detect the build environment, check whether XML/Ada, AWS and Curl support is available, and configure everything before building. If some component is missing, the configure script either reports an error or disables the corresponding feature. The configure script provides several standard options and you may use:

In most cases you will configure with the following command:

./configure

Building to get a shared library can sometimes be a real challenge. With GNAT 2018, you can configure as follows:

./configure --enable-shared

But with some other versions of the Ada compiler, you may need to add some linker options to make sure that the generated shared library is usable. It can happen that the -ldl option is not passed correctly when the shared library is created; when the library is then used, you end up with missing symbols such as dlopen, dlclose, dlsym and dlerror. When this happens, you can fix it by re-configuring and adding the missing option with the following command:

./configure --enable-shared --enable-link-options-util=--no-as-needed,-ldl,--as-needed

2.3 Build

After configuration is successful, you can build the library by running:

make

After building, it is good practice to run the unit tests before installing the library. The unit tests are built and executed using:

make test

The unit tests can also be executed directly by running the bin/util_harness test program.

2.4 Installation

The installation is done by running the install target:

make install

If you want to install in a specific place, you can change the prefix and indicate the installation directory as follows:

make install prefix=/opt

2.5 Using

To use the library in an Ada project, add the following line at the beginning of your GNAT project file:

with "utilada";

If you use only a subset of the library, you may use the following GNAT projects:

GNAT project Description
utilada_core Provides: Util.Concurrent, Util.Strings, Util.Texts,
Util.Locales, Util.Refs, Util.Stacks, Util.Listeners,
Util.Executors
utilada_base Provides: Util.Beans, Util.Commands, Util.Dates,
Util.Events, Util.Files, Util.Log, Util.Properties,
Util.Systems
utilada_sys Provides: Util.Encoders, Util.Measures,
Util.Processes, Util.Serialize, Util.Streams
utilada_lzma Provides: Util.Encoders.Lzma, Util.Streams.Buffered.Lzma
utilada_aws Provides HTTP client support using AWS
utilada_curl Provides HTTP client support using CURL
utilada_http Provides Util.Http
utilada Uses all utilada GNAT projects except the unit test library
utilada_unit Support to write unit tests on top of Ahven or AUnit

3 Files

The Util.Files package provides various utility operations around files to help in reading, writing, searching for files in a path. To use the operations described here, use the following GNAT project:

with "utilada_base";

3.1 Reading and writing

To easily get the full content of a file, the Read_File procedure can be used. A first form populates an Unbounded_String or a vector of strings. A second form takes a procedure that is called with each line while the file is read. These different forms simplify reading files, as it is possible to write:

Content : Ada.Strings.Unbounded.Unbounded_String;
Util.Files.Read_File ("config.txt", Content);

or

List : Util.Strings.Vectors.Vector;
Util.Files.Read_File ("config.txt", List);

or

procedure Read_Line (Line : in String) is ...
Util.Files.Read_File ("config.txt", Read_Line'Access);

Similarly, writing a file from a String or an Unbounded_String is easily done by using Write_File as follows:

Util.Files.Write_File ("config.txt", "full content");

3.2 Searching files

Searching for a file in a list of directories can be accomplished by using the Iterate_Path, Iterate_Files_Path or Find_File_Path operations.

The Find_File_Path function is helpful to find a file in some PATH search list. The function looks in each search directory for the given file name and it builds and returns the computed path of the first file found in the search list. For example:

Path : String := Util.Files.Find_File_Path ("ls",
                                            "/bin:/usr/bin",
                                            ':');

This will return /usr/bin/ls on most Unix systems.

3.3 Rolling file manager

The Util.Files.Rolling package provides a simple support to roll a file based on some rolling policy. Such rolling is traditionally used for file logs to move files to another place when they reach some size limit or when some date conditions are met (such as a day change). The file manager uses a file path and a pattern. The file path is used to define the default or initial file. The pattern is used when rolling occurs to decide how to reorganize files.

The file manager defines a triggering policy represented by Policy_Type. It controls when the file rolling must be performed.

To control how the rolling is made, the Strategy_Type defines the behavior of the rolling.

To use the file manager, the first step is to create an instance, configure the default file and pattern, and choose the triggering policy and strategy:

Manager : Util.Files.Rolling.File_Manager;
Manager.Initialize ("dynamo.log", "dynamo-%i.log",
                    Policy => (Size_Policy, 100_000),
                    Strategy => (Rollover_Strategy, 1, 10));

After the initialization, the current file is retrieved by using the Get_Current_Path function and you should call Is_Rollover_Necessary before writing content to the file. When it returns True, you should call the Rollover procedure, which performs the rollover according to the rolling strategy.
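
Putting these operations together, a typical usage looks as follows (a sketch based on the operations described above; the exact profiles may take additional parameters):

```ada
if Manager.Is_Rollover_Necessary then
   Manager.Rollover;
end if;
declare
   Path : constant String := Manager.Get_Current_Path;
begin
   --  Write or append the content to the file designated by Path.
   ...
end;
```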

4 Logging

The Util.Log package and children provide a simple logging framework inspired by the Java Log4j library. It is intended to provide a subset of the logging features available in other languages while being flexible, extensible, small and efficient. Log messages in large applications are very helpful to understand, track and fix complex issues, some of them being related to configuration issues or interaction with other systems. The overhead of calling a log operation is negligible when the log is disabled, as it is in the order of 30 ns, and reasonable for a file appender, as it is in the order of 5 us. To use the packages described here, use the following GNAT project:

with "utilada_base";

4.1 Using the log framework

A bit of terminology:

4.2 Logger Declaration

Similar to other logging frameworks such as Java Log4j and Log4cxx, it is necessary to have an instance of a logger to write a log message. The logger instance holds the configuration for the log to enable, disable and control the format and the appender that will receive the message. The logger instance is associated with a name that is used for the configuration. A good practice is to declare a Log instance in the package body or the package private part to make the log instance available to all the package operations. The instance is created by using the Create function. The name used for the configuration is free, but using the full package name is helpful to control the logs precisely.

with Util.Log.Loggers;
package body X.Y is
  Log : constant Util.Log.Loggers.Logger := Util.Log.Loggers.Create ("X.Y");
end X.Y;

4.3 Logger Messages

A log message is associated with a log level which is used by the logger instance to decide to emit or drop the log message. To keep the logging API simple and make it easily usable in the application, several operations are provided to write a message with different log levels.

A log message is a string that contains optional formatting markers that follow more or less the Java MessageFormat class. A parameter is represented by a number enclosed by {}. The first parameter is represented by {0}, the second by {1}, and so on. Parameters are replaced in the final message only when the message is enabled by the log configuration. The use of parameters avoids formatting the log message when the log is not used.

The example below shows several calls to emit a log message with different levels:

 Log.Error ("Cannot open file {0}: {1}", Path, "File does not exist");
 Log.Warn ("The file {0} is empty", Path);
 Log.Info ("Opening file {0}", Path);
 Log.Debug ("Reading line {0}", Line);

The logger also provides a special Error procedure that accepts an Ada exception occurrence as parameter. The exception name and message are printed together with the error message. It is also possible to activate a complete traceback of the exception and report it in the error message. With this mechanism, an exception can be handled and reported easily:

 begin
    ...
 exception
    when E : others =>
       Log.Error ("Something bad occurred", E, Trace => True);
 end;

4.4 Log Configuration

The log configuration uses property files close to the Apache Log4j and to the Apache Log4cxx configuration files. The configuration file contains several parts to configure the logging framework:

Here is a simple log configuration that creates a file appender where log messages are written. The file appender is given the name result and is configured to write the messages in the file my-log-file.log. The file appender will use the level-message format for the layout of messages. Last is the configuration of the X.Y logger that will enable only messages starting from the WARN level.

log4j.rootCategory=DEBUG,result
log4j.appender.result=File
log4j.appender.result.File=my-log-file.log
log4j.appender.result.layout=level-message
log4j.logger.X.Y=WARN

By default when the layout is not set or has an invalid value, the full message is reported and the generated log messages will look as follows:

[2018-02-07 20:39:51] ERROR - X.Y - Cannot open file test.txt: File does not exist
[2018-02-07 20:39:51] WARN  - X.Y - The file test.txt is empty
[2018-02-07 20:39:51] INFO  - X.Y - Opening file test.txt
[2018-02-07 20:39:51] DEBUG - X.Y - Reading line ......

When the layout configuration is set to date-level-message, the message is printed with the date and message level.

[2018-02-07 20:39:51] ERROR: Cannot open file test.txt: File does not exist
[2018-02-07 20:39:51] WARN : The file test.txt is empty
[2018-02-07 20:39:51] INFO : Opening file test.txt
[2018-02-07 20:39:51] DEBUG: Reading line ......

When the layout configuration is set to level-message, only the message and its level are reported.

ERROR: Cannot open file test.txt: File does not exist
WARN : The file test.txt is empty
INFO : Opening file test.txt
DEBUG: Reading line ......

The last possible configuration for layout is message which only prints the message.

Cannot open file test.txt: File does not exist
The file test.txt is empty
Opening file test.txt
Reading line ......

4.4.1 Console appender

The Console appender recognises the following configurations:

Name Description
layout Defines the format of the message printed by the appender.
level Defines the minimum level above which messages are printed.
stderr When ‘true’ or ‘1’, use the console standard error;
by default the appender uses the standard output.

4.4.2 File appender

The File appender recognises the following configurations:

Name Description
layout Defines the format of the message printed by the appender.
level Defines the minimum level above which messages are printed.
File The path used by the appender to create the output file.
append When ‘true’ or ‘1’, the file is opened in append mode otherwise
it is truncated (the default is to truncate).
immediateFlush When ‘true’ or ‘1’, the file is flushed after each message log.
Immediate flush is useful in some situations to have the log file
updated immediately at the expense of slowing down the processing
of logs.

4.4.3 Rolling file appender

The RollingFile appender recognises the following configurations:

Name Description
layout Defines the format of the message printed by the appender.
level Defines the minimum level above which messages are printed.
fileName The name of the file to write to. If the file, or any of its parent
directories, do not exist, they will be created.
filePattern The pattern of the file name of the archived log file. The pattern
can contain ‘%i’, which is replaced by a counter incremented at each
rollover, and ‘%d’, which is replaced by a date pattern.
append When ‘true’ or ‘1’, the file is opened in append mode otherwise
it is truncated (the default is to truncate).
immediateFlush When ‘true’ or ‘1’, the file is flushed after each message log.
Immediate flush is useful in some situations to have the log file
updated immediately at the expense of slowing down the processing
of logs.
policy The triggering policy which drives when a rolling is performed.
Possible values are: none, size, time, size-time
strategy The strategy to use to determine the name and location of the
archive file. Possible values are: ascending, descending, and
direct. Default is ascending.
policyInterval How often a rollover should occur based on the most specific time
unit in the date pattern. This indicates the period in seconds
to check for pattern change in the time or size-time policy.
policyMin The minimum value of the counter. The default value is 1.
policyMax The maximum value of the counter. Once this value is reached, older
archives will be deleted on subsequent rollovers. The default
value is 7.
minSize The minimum size the file must have to roll over.

A typical rolling file configuration would look like:

log4j.rootCategory=DEBUG,applogger,apperror
log4j.appender.applogger=RollingFile
log4j.appender.applogger.layout=level-message
log4j.appender.applogger.level=DEBUG
log4j.appender.applogger.fileName=logs/debug.log
log4j.appender.applogger.filePattern=logs/debug-%d{YYYY-MM}/debug-%{dd}-%i.log
log4j.appender.applogger.strategy=descending
log4j.appender.applogger.policy=time
log4j.appender.applogger.policyMax=10
log4j.appender.apperror=RollingFile
log4j.appender.apperror.layout=level-message
log4j.appender.apperror.level=ERROR
log4j.appender.apperror.fileName=logs/error.log
log4j.appender.apperror.filePattern=logs/error-%d{YYYY-MM}/error-%{dd}.log
log4j.appender.apperror.strategy=descending
log4j.appender.apperror.policy=time

With this configuration, the error messages are written in the error.log file; they are rotated on a daily basis and moved to a directory whose name contains the year and month number. At the same time, debug messages are written in the debug.log file.

5 Property Files

The Util.Properties package and children implement support to read, write and use property files, either in the Java property file format or the Windows INI configuration file format. Each property is assigned a key and a value. The list of properties is stored in the Util.Properties.Manager tagged record and the properties are indexed by the key name. A property is therefore unique in the list. Properties can be grouped together in sub-properties so that a key can represent another list of properties. To use the packages described here, use the following GNAT project:

with "utilada_base";

5.1 File formats

The property file consists of simple name and value pairs separated by the = sign. Following the Windows INI file format, lists of properties can be grouped together in sections by using the [section-name] notation.

test.count=20
test.repeat=5
[FileTest]
test.count=5
test.repeat=2

5.2 Using property files

An instance of the Util.Properties.Manager tagged record must be declared; it provides the various operations that can be used. When created, the property manager is empty. One way to fill it is by using the Load_Properties procedure to read the property file. Another way is by using the Set procedure to insert or change a property, giving its name and its value.

In this example, the property file test.properties is loaded and, assuming that it contains the above configuration example, Get ("test.count") will return the string "20". The property test.repeat is then modified to have the value "23" and the properties are then saved in the file.

with Util.Properties;
...
   Props : Util.Properties.Manager;
   ...
      Props.Load_Properties (Path => "test.properties");
      Ada.Text_IO.Put_Line ("Count: " & Props.Get ("test.count"));
      Props.Set ("test.repeat", "23");
      Props.Save_Properties (Path => "test.properties");

To be able to access a section from the property manager, it is necessary to retrieve it by using the Get function and giving the section name. For example, to retrieve the test.count property of the FileTest section, the following code is used:

   FileTest : Util.Properties.Manager := Props.Get ("FileTest");
   ...
      Ada.Text_IO.Put_Line ("[FileTest] Count: "
                            & FileTest.Get ("test.count"));

When getting or removing a property, the NO_PROPERTY exception is raised if the property name was not found in the map. To avoid that exception, it is possible to check whether the name is known by using the Exists function.

   if Props.Exists ("test.old_count") then
      ... --  Property exists
   end if;

5.3 Reading JSON property files

The Util.Properties.JSON package provides operations to read JSON content and put the result in a property manager. The JSON content is flattened into a set of name/value pairs. The JSON structure is reflected in the name. Example:

{ "id": "1",                                 id         -> 1
  "info": { "name": "search",                info.name  -> search
            "count": "12",                   info.count -> 12
            "data": { "value": "empty" }},   info.data.value  -> empty
  "count": 1                                 count      -> 1
}

To get the value of a JSON property, use the flattened name. For example:

 Value : constant String := Props.Get ("info.data.value");

The default separator used to construct a flattened name is the dot (.), but this can be changed easily when loading the JSON file by specifying the desired separator:

 Util.Properties.JSON.Read_JSON (Props, "config.json", "|");

Then, the property is fetched by using:

 Value : constant String := Props.Get ("info|data|value");

5.4 Property bundles

Property bundles represent several property files that share some overriding rules and capabilities. They are inspired by Java resource bundles, which make it easy to localize configuration files or messages. When loading a property bundle, a locale is defined to specify the target language and locale. If a specific property file for that locale exists, it is used first. Otherwise, the property bundle will use the default property file.

A rule exists on the name of the specific property locale file: it must start with the bundle name followed by _ and the name of the locale. The default property file must be the bundle name. For example, the bundle dates is associated with the following property files:

dates.properties           Default values (English locale)
dates_fr.properties        French locale
dates_de.properties        German locale
dates_es.properties        Spanish locale

Because a bundle can be associated with one or several property files, a specific loader is used. The loader instance must be declared and configured to indicate one or several search directories that contain property files.

with Util.Properties.Bundles;
...
   Loader : Util.Properties.Bundles.Loader;
   Bundle : Util.Properties.Bundles.Manager;
   ...
   Util.Properties.Bundles.Initialize (Loader,
                                       "bundles;/usr/share/bundles");
   Util.Properties.Bundles.Load_Bundle (Loader, "dates", "fr", Bundle);
   Ada.Text_IO.Put_Line (Bundle.Get ("util.month1.long"));

In this example, the util.month1.long key is first searched in the dates_fr French locale and, if it is not found, in the default locale.

The restriction when using bundles is that they don’t allow changing any value: the NOT_WRITEABLE exception is raised when one of the Set operations is used.

When a bundle cannot be loaded, the NO_BUNDLE exception is raised by the Load_Bundle operation.

5.5 Advanced usage of properties

The property manager holds the name and value pairs by using an Ada Bean object.

It is possible to iterate over the properties by using the Iterate procedure, which accepts as parameter a Process procedure that gets the property name as well as the property value. The value itself is passed as an Util.Beans.Objects.Object type.
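
As an illustration, such an iteration could be sketched as follows (the exact Iterate profile is an assumption based on the description above):

```ada
procedure Print (Name  : in String;
                 Value : in Util.Beans.Objects.Object) is
begin
   Ada.Text_IO.Put_Line (Name & "=" & Util.Beans.Objects.To_String (Value));
end Print;
...
Props.Iterate (Print'Access);
```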

6 Date Utilities

The Util.Dates package provides various date utilities to help in formatting and parsing dates in various standard formats. It completes the standard Ada.Calendar.Formatting and other packages by implementing specific formatting and parsing. To use the packages described here, use the following GNAT project:

with "utilada_base";

6.1 Date Operations

Several operations allow the following to be computed from a given date:

The Date_Record type represents a date in a split format, giving easy access to the day, month, hour and other information.

Now        : Ada.Calendar.Time := Ada.Calendar.Clock;
Week_Start : Ada.Calendar.Time := Get_Week_Start (Now);
Week_End   : Ada.Calendar.Time := Get_Week_End (Now);
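
A Date_Record is typically obtained from an Ada.Calendar.Time by using the Split procedure (a sketch; the exact profile may differ):

```ada
Date : Util.Dates.Date_Record;
...
Util.Dates.Split (Date, Ada.Calendar.Clock);
--  Date now gives access to the split day, month, hour, ... components.
```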

6.2 RFC7231 Dates

RFC 7231 defines a standard date format that is used by HTTP headers. The Util.Dates.RFC7231 package provides an Image function to convert a date into that format and a Value function to parse such a format string and return the date.

  Now  : constant Ada.Calendar.Time := Ada.Calendar.Clock;
  S    : constant String := Util.Dates.RFC7231.Image (Now);
  Date : Ada.Calendar.Time := Util.Dates.RFC7231.Value (S);

A Constraint_Error exception is raised when the date string is not in the correct format.

6.3 ISO8601 Dates

ISO 8601 defines a standard date format that is commonly used and easily parsed by programs. The Util.Dates.ISO8601 package provides an Image function to convert a date into that format and a Value function to parse such a format string and return the date.

  Now  : constant Ada.Calendar.Time := Ada.Calendar.Clock;
  S    : constant String := Util.Dates.ISO8601.Image (Now);
  Date : Ada.Calendar.Time := Util.Dates.ISO8601.Value (S);

A Constraint_Error exception is raised when the date string is not in the correct format.

6.4 Localized date formatting

The Util.Dates.Formats package provides date formatting and parsing operations similar to the Unix strftime and strptime functions or GNAT.Calendar.Time_IO. The localization of month and day labels is however handled through Util.Properties.Bundles (similar to the Java world). Unlike strftime and strptime, this allows a multi-threaded application to report dates in several languages. GNAT.Calendar.Time_IO only supports English, which is the reason why it is not used here.

The date pattern recognizes the following formats:

Format Description
%a The abbreviated weekday name according to the current locale.
%A The full weekday name according to the current locale.
%b The abbreviated month name according to the current locale.
%h Equivalent to %b. (SU)
%B The full month name according to the current locale.
%c The preferred date and time representation for the current locale.
%C The century number (year/100) as a 2-digit integer. (SU)
%d The day of the month as a decimal number (range 01 to 31).
%D Equivalent to %m/%d/%y
%e Like %d, the day of the month as a decimal number,
but a leading zero is replaced by a space. (SU)
%F Equivalent to %Y-%m-%d (the ISO 8601 date format). (C99)
%G The ISO 8601 week-based year
%H The hour as a decimal number using a 24-hour clock (range 00 to 23).
%I The hour as a decimal number using a 12-hour clock (range 01 to 12).
%j The day of the year as a decimal number (range 001 to 366).
%k The hour (24 hour clock) as a decimal number (range 0 to 23).
%l The hour (12 hour clock) as a decimal number (range 1 to 12).
%m The month as a decimal number (range 01 to 12).
%M The minute as a decimal number (range 00 to 59).
%n A newline character. (SU)
%p Either “AM” or “PM”
%P Like %p but in lowercase: “am” or “pm”
%r The time in a.m. or p.m. notation.
In the POSIX locale this is equivalent to %I:%M:%S %p. (SU)
%R The time in 24 hour notation (%H:%M).
%s The number of seconds since the Epoch, that is,
since 1970-01-01 00:00:00 UTC. (TZ)
%S The second as a decimal number (range 00 to 60).
%t A tab character. (SU)
%T The time in 24 hour notation (%H:%M:%S). (SU)
%u The day of the week as a decimal, range 1 to 7,
Monday being 1. See also %w. (SU)
%U The week number of the current year as a decimal
number, range 00 to 53
%V The ISO 8601 week number
%w The day of the week as a decimal, range 0 to 6,
Sunday being 0. See also %u.
%W The week number of the current year as a decimal number,
range 00 to 53
%x The preferred date representation for the current locale
without the time.
%X The preferred time representation for the current locale
without the date.
%y The year as a decimal number without a century (range 00 to 99).
%Y The year as a decimal number including the century.
%z The timezone as hour offset from GMT.
%Z The timezone name or abbreviation.

The following strftime flags are ignored:

Format Description
%E Modifier: use alternative format. (SU)
%O Modifier: use alternative format. (SU)

SU: Single Unix Specification; C99: C99 standard, POSIX.1-2001; TZ: timezone extensions.

See the strftime (3) and strptime (3) manual pages.

To format a localized date, it is first necessary to get a bundle for the dates so that date elements are translated into the given locale.

 Factory     : Util.Properties.Bundles.Loader;
 Bundle      : Util.Properties.Bundles.Manager;
 ...
    Load_Bundle (Factory, "dates", "fr", Bundle);

The date is formatted according to the pattern string described above. The bundle is used by the formatter to use the day and month names in the expected locale.

 Date : String := Util.Dates.Formats.Format (Pattern => Pattern,
                                             Date    => Ada.Calendar.Clock,
                                             Bundle  => Bundle);

To parse a date according to a pattern and a localization, the same pattern string and bundle can be used and the Parse function will return the date in split format.

 Result : Date_Record := Util.Dates.Formats.Parse (Date    => Date,
                                                   Pattern => Pattern,
                                                   Bundle  => Bundle);

7 Ada Beans

A Java Bean (http://en.wikipedia.org/wiki/JavaBean) is an object that gives access to its properties through getters and setters. Java Beans rely on the use of Java introspection to discover the Java Bean object properties.

An Ada Bean has some similarities with the Java Bean as it tries to expose an object through a set of common interfaces. Since Ada does not have introspection, some developer work is necessary. The Ada Bean framework consists of:

The benefit of Ada Beans comes when you need to get a value or invoke a method on an object that you don’t know at compile time. That binding is done later through some external configuration or presentation file.

The Ada Bean framework is the basis for the implementation of Ada Server Faces and Ada EL. It allows the presentation layer to access information provided by Ada beans.

To use the packages described here, use the following GNAT project:

with "utilada_base";

7.1 Objects

The Util.Beans.Objects package provides a data type to manage entities of different types by using the same abstraction. The Object type can hold values of various types.

An Object can hold one of the following values:

Several operations are provided to convert a value into an Object.

with Util.Beans.Objects; use Util.Beans.Objects;
  Value : Util.Beans.Objects.Object
     := Util.Beans.Objects.To_Object (String '("something"));
  Value := Value + To_Object (String '("12"));
  Value := Value - To_Object (Integer (3));

The package provides various operations to check, convert and use the Object type.

Name Description
Is_Empty Returns true if the object is the empty string or empty list
Is_Null Returns true if the object does not contain any value
Is_Array Returns true if the object is an array
Get_Type Get the type of the object
To_String Converts the object to a string
To_Wide_Wide_String Convert to a wide wide string
To_Unbounded_String Convert to an unbounded string
To_Boolean Convert to a boolean
To_Integer Convert to an integer
To_Long_Integer Convert to a long integer
To_Long_Long_Integer Convert to a long long integer
To_Float Convert to a float
To_Long_Float Convert to a long float
To_Long_Long_Float Convert to a long long float
To_Duration Convert to a duration
To_Bean Convert to an access to the Read_Only_Bean’Class

Conversion to a time or enumeration is provided by specific packages.

The support for enumerations is provided by the generic package Util.Beans.Objects.Enums, which must be instantiated with the enumeration type. Example of instantiation:

 with Util.Beans.Objects.Enums;
 ...
    type Color_Type is (GREEN, BLUE, RED, BROWN);
    package Color_Enum is
       new Util.Beans.Objects.Enums (Color_Type);

Then, two functions are available to convert the enum value into an Object or convert the Object back to the enum value:

 Color : Object := Color_Enum.To_Object (BLUE);
 C : Color_Type := Color_Enum.To_Value (Color);
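
Conversion to an Ada.Calendar.Time is handled in a similar way by the Util.Beans.Objects.Time package (a sketch; the exact function names are assumptions):

```ada
 Date : Object := Util.Beans.Objects.Time.To_Object (Ada.Calendar.Clock);
 T    : Ada.Calendar.Time := Util.Beans.Objects.Time.To_Time (Date);
```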

7.2 Object maps

The Util.Beans.Objects.Maps package provides a map of objects with a String as key. This allows names to be associated with objects. To create an instance of the map, it is possible to use the Create function as follows:

with Util.Beans.Objects.Maps;
...
   Person : Util.Beans.Objects.Object := Util.Beans.Objects.Maps.Create;

Then, it becomes possible to populate the map with objects by using the Set_Value procedure as follows:

Util.Beans.Objects.Set_Value (Person, "name",
                              To_Object (Name));
Util.Beans.Objects.Set_Value (Person, "last_name",
                              To_Object (Last_Name));
Util.Beans.Objects.Set_Value (Person, "age",
                              To_Object (Age));

Getting a value from the map is done by using the Get_Value function:

Name : Util.Beans.Objects.Object := Get_Value (Person, "name");

It is also possible to iterate over the values of the map by using the Iterate procedure or by using the iterator support provided by the Util.Beans.Objects.Iterators package.

7.3 Object vectors

The Util.Beans.Objects.Vectors package provides a vector of objects. To create an instance of the vector, it is possible to use the Create function as follows:

with Util.Beans.Objects.Vectors;
...
   List : Util.Beans.Objects.Object := Util.Beans.Objects.Vectors.Create;

7.4 Datasets

The Datasets package implements the Dataset list bean, which defines a set of objects organized in rows and columns. The Dataset implements the List_Bean interface and allows iterating over its rows. Each row defines a Bean instance and gives access to each column value. Each column is associated with a unique name. The row Bean allows getting or setting a column by using the column name.

 with Util.Beans.Objects.Datasets;
 ...
    Set : Util.Beans.Objects.Datasets.Dataset_Access
        := new Util.Beans.Objects.Datasets.Dataset;

After creating the dataset instance, the first step is to define the columns that compose the list. This is done by using the Add_Column procedure:

 Set.Add_Column ("name");
 Set.Add_Column ("email");
 Set.Add_Column ("age");

To populate the dataset, the package only provides the Append procedure, which adds a new row and calls a procedure whose job is to fill the columns of the new row. The procedure gets the row as an array of Object:

 procedure Fill (Row : in out Util.Beans.Objects.Object_Array) is
 begin
    Row (Row'First) := To_Object (String '("Yoda"));
    Row (Row'First + 1) := To_Object (String '("Yoda@Dagobah"));
    Row (Row'First + 2) := To_Object (Integer (900));
 end Fill;
 Set.Append (Fill'Access);

The dataset instance is converted to an Object by using the To_Object function. Note that by default To_Object takes ownership of the object, which will therefore be released automatically.

 List : Util.Beans.Objects.Object
    := Util.Beans.Objects.To_Object (Set);

7.5 Object iterator

Iterators are provided by the Util.Beans.Objects.Iterators package. The iterator instance is created by using either the First or Last function on the object to iterate.

with Util.Beans.Objects.Iterators;
...
   Iter : Util.Beans.Objects.Iterators.Iterator
      := Util.Beans.Objects.Iterators.First (Object);

The iterator is used in conjunction with its Has_Element function and either its Next or Previous procedure. The current element is obtained by using the Element function. When the object being iterated is a map, a key can be associated with the element and is obtained by the Key function.

while Util.Beans.Objects.Iterators.Has_Element (Iter) loop
   declare
      Item : Object := Util.Beans.Objects.Iterators.Element (Iter);
      Key  : String := Util.Beans.Objects.Iterators.Key (Iter);
   begin
      ...
      Util.Beans.Objects.Iterators.Next (Iter);
   end;
end loop;

7.6 Bean Interface

An Ada Bean is an object that implements the Util.Beans.Basic.Readonly_Bean or the Util.Beans.Basic.Bean interface. By implementing these interfaces, the object provides a behavior that is close to Java Beans: a getter and a setter operation are available.
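
As an illustration, a minimal read-only bean could be written as follows. This is a sketch: the Person type and its "name" property are hypothetical, and the Get_Value profile is assumed to match the Util.Beans.Basic.Readonly_Bean interface.

```ada
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
with Util.Beans.Basic;
with Util.Beans.Objects;
...
   --  Hypothetical bean exposing a single "name" property.
   type Person is new Util.Beans.Basic.Readonly_Bean with record
      Name : Unbounded_String;
   end record;

   --  Getter called by the framework with the property name.
   overriding
   function Get_Value (From : in Person;
                       Name : in String) return Util.Beans.Objects.Object is
   begin
      if Name = "name" then
         return Util.Beans.Objects.To_Object (From.Name);
      else
         return Util.Beans.Objects.Null_Object;
      end if;
   end Get_Value;
```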

8 Command Line Utilities

The Util.Commands package provides support for writing command line applications. An application can define several commands, each identified by a unique name and having its own options and arguments. The command line support is built around several child packages.

The Util.Commands.Drivers package is a generic package that must be instantiated to define the list of commands that the application supports. It provides operations to register commands and then to execute them with a list of arguments. When a command is executed, it gets its name, the command arguments and an application context. The application context can be used to provide arbitrary information that is needed by the application.

The Util.Commands.Parsers package provides the support to parse the command line arguments.

The Util.Commands.Consoles package is a generic package that can help a command display its results. Its use is optional.

8.1 Command arguments

The Argument_List interface defines a common interface to get access to the command line arguments. It has several concrete implementations. This is the interface type used by the commands registered with and executed by the driver.

The Default_Argument_List gives access to the program command line arguments through the Ada.Command_Line package.

The String_Argument_List splits a string into a list of arguments. It can be used to build new command line arguments.

8.2 Command line driver

The Util.Commands.Drivers generic package provides support for building command line tools that have different commands identified by a name. It defines the Driver_Type tagged record, which provides a registry of application commands together with entry points to register and execute them.

The Context_Type package parameter defines the type for the Context parameter that is passed to the command when it is executed. It can be used to provide application specific context to the command.

The Config_Parser describes the parser package that handles the analysis of command line options. To use the GNAT option parser, instantiate the driver with the Util.Commands.Parsers.GNAT_Parser package.

8.3 Command line parsers

Parsing command line arguments before a command executes is handled by the Config_Parser generic package parameter. This makes it possible to customize how the arguments are parsed.

The Util.Commands.Parsers.No_Parser package can be used to execute the command without parsing its arguments.

The Util.Commands.Parsers.GNAT_Parser.Config_Parser package provides support to parse command line arguments by using the GNAT Getopt support.

8.4 Example

First, an application context type is defined to allow a command to get some application specific information. The context type is passed during the instantiation of the Util.Commands.Drivers package and will be passed to commands through the Execute procedure.

type Context_Type is limited record
   ... --  Some application specific data
end record;
package Drivers is
  new Util.Commands.Drivers
    (Context_Type  => Context_Type,
     Config_Parser => Util.Commands.Parsers.GNAT_Parser.Config_Parser,
     Driver_Name   => "tool");

Then an instance of the command driver must be declared. Commands are then registered with the command driver so that it is able to find and execute them.

 Driver : Drivers.Driver_Type;

A command can be implemented by a simple procedure or by using the Command_Type abstract tagged record and implementing the Execute procedure:

procedure Command_1 (Name    : in String;
                     Args    : in Argument_List'Class;
                     Context : in out Context_Type);
type My_Command is new Drivers.Command_Type with null record;
procedure Execute (Command : in out My_Command;
                   Name    : in String;
                   Args    : in Argument_List'Class;
                   Context : in out Context_Type);

Commands are registered during the application initialization by using the Add_Command procedure:

Driver.Add_Command (Name => "cmd1",
                    Description => "",
                    Handler => Command_1'Access);

A command is executed by giving its name and a list of arguments. By using the Default_Argument_List type, it is possible to give the command the application's command line arguments.

Ctx   : Context_Type;
Args  : Util.Commands.Default_Argument_List (0);
...
Driver.Execute ("cmd1", Args, Ctx);

9 Serialization of data structures in CSV/JSON/XML

9.1 Introduction

The Util.Serialize package provides a customizable framework to serialize and de-serialize data structures in CSV, JSON and XML. It is inspired by the Java XStream library.

9.2 Record Mapping

The serialization relies on a mapping that must be provided for each data structure that must be read. Basically, it consists of writing an enumeration type, writing a procedure, and instantiating a mapping package. Let’s assume we have a record declared as follows:

type Address is record       
  City      : Unbounded_String;
  Street    : Unbounded_String;
  Country   : Unbounded_String;
  Zip       : Natural;
end record;  

The enumeration type shall define one value for each record member that has to be serialized or de-serialized.

 type Address_Fields is (FIELD_CITY, FIELD_STREET, FIELD_COUNTRY, FIELD_ZIP);

The de-serialization uses a specific procedure to fill the record members. The procedure that must be written is in charge of setting one field in the record. For that, it gets the record as an in out parameter, the field identification and the value.

procedure Set_Member (Addr  : in out Address;
                      Field : in Address_Fields;
                      Value : in Util.Beans.Objects.Object) is
begin
   case Field is
     when FIELD_CITY =>
       Addr.City := To_Unbounded_String (Value);

     when FIELD_STREET =>
       Addr.Street := To_Unbounded_String (Value);

     when FIELD_COUNTRY =>
       Addr.Country := To_Unbounded_String (Value);
     
     when FIELD_ZIP =>
        Addr.Zip := To_Integer (Value);
   end case;    
end Set_Member; 

The procedure will be called by the CSV, JSON or XML reader when a field is recognized.

The serialization to JSON or XML needs a function that returns the field value from the record value and the field identification. The value is returned as a Util.Beans.Objects.Object type which can hold a string, a wide wide string, a boolean, a date, an integer or a float.

function Get_Member (Addr  : in Address;
                     Field : in Address_Fields) return Util.Beans.Objects.Object is
begin
   case Field is
      when FIELD_CITY =>
         return Util.Beans.Objects.To_Object (Addr.City);

      when FIELD_STREET =>
         return Util.Beans.Objects.To_Object (Addr.Street);

      when FIELD_COUNTRY =>
         return Util.Beans.Objects.To_Object (Addr.Country);

      when FIELD_ZIP =>
         return Util.Beans.Objects.To_Object (Addr.Zip);

   end case;
end Get_Member;

A mapping package has to be instantiated to provide the necessary glue to tie the set procedure to the framework.

package Address_Mapper is
  new Util.Serialize.Mappers.Record_Mapper
     (Element_Type        => Address,    
      Element_Type_Access => Address_Access,
      Fields              => Address_Fields,
      Set_Member          => Set_Member);  

Note: a bug in the gcc compiler does not allow the Get_Member function to be specified in the generic package. As a workaround, the function must be associated with the mapping by using the Bind procedure.

9.3 Mapping Definition

The mapping package defines a Mapper type which holds the mapping definition. The mapping definition tells a mapper which name corresponds to which field. It is possible to define several mappings for the same record type. The mapper object is declared as follows:

Address_Mapping : Address_Mapper.Mapper;  

Then, each field is bound to a name as follows:

Address_Mapping.Add_Mapping ("city", FIELD_CITY);
Address_Mapping.Add_Mapping ("street", FIELD_STREET);
Address_Mapping.Add_Mapping ("country", FIELD_COUNTRY);
Address_Mapping.Add_Mapping ("zip", FIELD_ZIP);

Once initialized, the same mapper can be used to read several files in several threads at the same time (the mapper is only read by the JSON/XML parsers).

9.4 De-serialization

To de-serialize a JSON object, a parser object is created and one or several mappings are defined:

Reader : Util.Serialize.IO.JSON.Parser;
...
   Reader.Add_Mapping ("address", Address_Mapping'Access);

For XML de-serialization, we just have to use another parser:

Reader : Util.Serialize.IO.XML.Parser;
...
   Reader.Add_Mapping ("address", Address_Mapping'Access);

For CSV de-serialization, we just have to use another parser:

Reader : Util.Serialize.IO.CSV.Parser;
...
   Reader.Add_Mapping ("", Address_Mapping'Access);

The next step is to indicate the object that the de-serialization will write into. For this, the generic package provides the Set_Context procedure to register the root object that will be filled according to the mapping.

Addr : aliased Address;
...
  Address_Mapper.Set_Context (Reader, Addr'Access);

The Parse procedure parses a file using a CSV, JSON or XML parser. It uses the mappings registered by Add_Mapping and fills the objects registered by Set_Context. When the parsing is successful, the Addr object will hold the values.

  Reader.Parse (File);

9.5 Parser Specificities

9.5.1 XML

XML has attributes and elements, both of them associated with a name. To specify in the mapping that a value is stored in an XML attribute, the name must be prefixed by the @ sign (this is very close to an XPath expression). For example, if the city XML element has an id attribute, we could map it to a FIELD_CITY_ID field as follows:

Address_Mapping.Add_Mapping ("city/@id", FIELD_CITY_ID);

9.5.2 CSV

A CSV file is flat and each row is assumed to contain the same kind of entity. By default, the first row of the CSV file is a column header, which the de-serialization uses to associate each column with a field. The mapping defined through Add_Mapping uses the column header name to indicate which column corresponds to which field.

If a CSV file does not contain a column header, the mapping must be created by using the default column header names (e.g. A, B, C, …, AA, AB, …). The parser must be told about this lack of column header:

   Parser.Set_Default_Headers;

10 HTTP

The Util.Http package provides a set of APIs that allows applications to use the HTTP protocol. It defines a common interface on top of CURL and AWS so that it is possible to use one of these two libraries in a transparent manner.

10.1 Client

The Util.Http.Clients package defines a set of API for an HTTP client to send requests to an HTTP server.

10.1.1 GET request

To retrieve a content using the HTTP GET operation, a client instance must be created. The response is returned in a specific object that must therefore be declared:

Http     : Util.Http.Clients.Client;
Response : Util.Http.Clients.Response;

Before invoking the GET operation, the client can set up a number of HTTP headers.

Http.Add_Header ("X-Requested-By", "wget");

The GET operation is performed when the Get procedure is called:

Http.Get ("http://www.google.com", Response);

Once the response is received, the Response object contains the status of the HTTP response, the HTTP reply headers and the body. A response header can be obtained by using the Get_Header function and the body using Get_Body:

Content : constant String := Response.Get_Body;
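
Other operations follow the same pattern. For example, a POST request could be submitted as sketched below; the URL and form data are illustrative, and the Post and Get_Status operations are assumed to follow the same conventions as Get:

```ada
Http     : Util.Http.Clients.Client;
Response : Util.Http.Clients.Response;
...
--  Submit a form-encoded body (illustrative URL and data).
Http.Add_Header ("Content-Type", "application/x-www-form-urlencoded");
Http.Post ("https://example.com/form", "name=John", Response);

--  Check the HTTP status of the reply.
if Response.Get_Status /= 200 then
   ...
end if;
```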

11 Streams

The Util.Streams package provides several types and operations to allow the composition of input and output streams. Input streams can be chained together so that the data traverses the different stream objects when it is read. Similarly, output streams can be chained and the data that is written traverses the different streams from the first one up to the last one in the chain. During such traversal, a stream object is able to buffer the data or make transformations on it.

The Input_Stream interface represents the stream to read data. It only provides a Read procedure. The Output_Stream interface represents the stream to write data. It provides the Write, Flush and Close operations.

To use the packages described here, use the following GNAT project:

with "utilada_sys";

11.1 Buffered Streams

The Output_Buffer_Stream and Input_Buffer_Stream types implement an output stream and an input stream, respectively, each managing a buffer. The data is first written to the buffer and, when the buffer is full or flushed, it gets written to the target output stream.

The Output_Buffer_Stream must be initialized to indicate the buffer size as well as the target output stream onto which the data will be flushed. For example, a pipe stream could be created and configured to use the buffer as follows:

with Util.Streams.Buffered;
with Util.Streams.Pipes;
...
   Pipe   : aliased Util.Streams.Pipes.Pipe_Stream;
   Buffer : Util.Streams.Buffered.Output_Buffer_Stream;
   ...
      Buffer.Initialize (Output => Pipe'Unchecked_Access,
                         Size => 1024);

In this example, the buffer of 1024 bytes is configured to flush its content to the pipe input stream so that what is written to the buffer will be received as input by the program. The Output_Buffer_Stream provides write operations that deal only with binary data (Stream_Element). To write text, it is best to use the Print_Stream type from the Util.Streams.Texts package, as it extends the Output_Buffer_Stream and provides several operations to write characters and strings.

The Input_Buffer_Stream must also be initialized to indicate the buffer size and either an input stream or an input content. When configured, the input stream is used to fill the input buffer. The buffer configuration is very similar to the output stream:

with Util.Streams.Buffered;
with Util.Streams.Pipes;
...
   Pipe   : aliased Util.Streams.Pipes.Pipe_Stream;
   Buffer : Util.Streams.Buffered.Input_Buffer_Stream;
   ...
      Buffer.Initialize (Input => Pipe'Unchecked_Access, Size => 1024);

In this case, the buffer of 1024 bytes is filled by reading the pipe stream, and thus getting the program’s output.

11.2 Texts

The Util.Streams.Texts package implements text oriented input and output streams. The Print_Stream type extends the Output_Buffer_Stream to allow writing text content.

The Reader_Stream type extends the Input_Buffer_Stream and makes it possible to read text content.

11.3 File streams

The Util.Streams.Files package provides input and output streams that access files on top of the Ada Stream_IO standard package.
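
For example, a File_Stream can be opened for writing and combined with a Print_Stream to produce text content. This is a sketch that follows the Initialize profile used in the AES example of section 11.10; the file name is illustrative:

```ada
with Ada.Streams.Stream_IO;
with Util.Streams.Files;
with Util.Streams.Texts;
...
   File    : aliased Util.Streams.Files.File_Stream;
   Printer : Util.Streams.Texts.Print_Stream;
   ...
      --  Create the output file and connect the text printer to it.
      File.Initialize (Mode => Ada.Streams.Stream_IO.Out_File,
                       Name => "output.txt");
      Printer.Initialize (Output => File'Unchecked_Access,
                          Size   => 4096);
      Printer.Write ("Hello world!");
```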

11.4 Pipes

The Util.Streams.Pipes package defines a pipe stream to or from a process. It makes it possible to launch an external program while reading the program's standard output or providing its standard input. The Pipe_Stream type represents the input or output stream for the external program. This is a portable interface that works on Unix and Windows.

The process is created and launched by the Open operation. The pipe gives read or write access to the process through the Read and Write operations. It is very close to the popen operation provided by the C stdio library. First, create the pipe instance:

with Util.Streams.Pipes;
...
   Pipe : aliased Util.Streams.Pipes.Pipe_Stream;

The pipe instance can be associated with only one process at a time. The process is launched by using the Open command, specifying the command to execute as well as the pipe redirection mode.

For example, to run the ls -l command and read its output, we could use:

Pipe.Open (Command => "ls -l", Mode => Util.Processes.READ);

The Pipe_Stream is not buffered, but a buffer can easily be configured by using the Input_Buffer_Stream type and connecting it to the pipe so that it reads the pipe to fill the buffer. The buffer is initialized as follows:

with Util.Streams.Buffered;
...
   Buffer : Util.Streams.Buffered.Input_Buffer_Stream;
   ...
   Buffer.Initialize (Input => Pipe'Unchecked_Access, Size => 1024);

And to read the process output, one can use the following:

 Content : Ada.Strings.Unbounded.Unbounded_String;
 ...
 Buffer.Read (Into => Content);

The pipe object should be closed when reading from or writing to it is finished. By closing the pipe, the caller waits for the termination of the process. The process exit status can then be obtained by using the Get_Exit_Status function.

 Pipe.Close;
 if Pipe.Get_Exit_Status /= 0 then
    Ada.Text_IO.Put_Line ("Command exited with status "
                          & Integer'Image (Pipe.Get_Exit_Status));
 end if;

You will note that the Pipe_Stream is a limited type and thus cannot be copied. When leaving the scope of the Pipe_Stream instance, the application will wait for the process to terminate.

Before opening the pipe, it is possible to exercise some control over the process that will be created, in order to configure:

All these operations must be done before calling the Open procedure.

11.5 Sockets

The Util.Streams.Sockets package defines a socket stream.

11.6 Raw files

The Util.Streams.Raw package provides a stream directly on top of file system operations read and write.

11.7 Encoder Streams

Util.Streams.Buffered.Encoders is a generic package which implements an encoding or decoding stream through the Transformer interface. The generic package must be instantiated with a transformer type. The stream passes the data to be written to the Transform method of that interface, which transforms the data before it is written.

The AES encoding stream is created as follows:

package Encoding is
  new Util.Streams.Buffered.Encoders (Encoder => Util.Encoders.AES.Encoder);

and the AES decoding stream is created with:

package Decoding is
  new Util.Streams.Buffered.Encoders (Encoder => Util.Encoders.AES.Decoder);

The encoding stream instance is declared:

  Encode : Encoding.Encoder_Stream;

The encoding stream manages a buffer that is used to hold the encoded data before it is written to the target stream. The Initialize procedure must be called to indicate the target stream, the size of the buffer and the encoding format to be used.

 Encode.Initialize (Output => File'Access, Size => 4096, Format => "base64");

The encoding stream provides a Produces procedure that reads the encoded stream and writes the result into another stream. It also provides a Consumes procedure that encodes a stream by reading its content and writing the encoded result to another stream.

11.8 Base16 Encoding Streams

The Util.Streams.Base16 package provides streams to encode and decode the stream using Base16.

11.9 Base64 Encoding Streams

The Util.Streams.Base64 package provides streams to encode and decode the stream using Base64.
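
As a sketch, a Base64 encoding stream could be chained to a file stream in the same way as the AES streams of section 11.10. The Encoding_Stream type name and the Initialize parameters below are assumptions based on that pattern; the file name is illustrative:

```ada
with Ada.Streams.Stream_IO;
with Util.Streams.Files;
with Util.Streams.Base64;
...
   Out_Stream : aliased Util.Streams.Files.File_Stream;
   Encoder    : Util.Streams.Base64.Encoding_Stream;
   ...
      --  Everything written to Encoder is Base64-encoded and
      --  flushed to the file.
      Out_Stream.Initialize (Mode => Ada.Streams.Stream_IO.Out_File,
                             Name => "content.b64");
      Encoder.Initialize (Output => Out_Stream'Unchecked_Access,
                          Size   => 4096);
```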

11.10 AES Encoding Streams

The Util.Streams.AES package defines the Encoding_Stream and Decoding_Stream types to encrypt and decrypt using the AES cipher. Before using these streams, you must use the Set_Key procedure to set up the encryption or decryption key and define the AES encryption mode to be used. The following encryption modes are supported:

The encryption and decryption keys are represented by the Util.Encoders.Secret_Key limited type. The key cannot be copied, its content is protected, and the memory is erased once the instance is deleted. The size of the encryption key defines the AES encryption level to be used:

Other key sizes will raise a pre-condition failure or a Constraint_Error exception. The recommended key size is 32 bytes, which selects AES-256. The key could be declared as follows:

Key : Util.Encoders.Secret_Key
          (Length => Util.Encoders.AES.AES_256_Length);

The encryption and decryption keys are initialized by using the Util.Encoders.Create operations or by using one of the key derivation functions provided by the Util.Encoders.KDF package. A simple string password key is created by using:

Password_Key : constant Util.Encoders.Secret_Key
          := Util.Encoders.Create ("mysecret");

Using a password key like this is not good practice, and it may be useful to generate a stronger key by using one of the key derivation functions. We will use PBKDF2 HMAC-SHA256 with 20000 iterations (see RFC 8018):

Util.Encoders.KDF.PBKDF2_HMAC_SHA256 (Password => Password_Key,
                                      Salt     => Password_Key,
                                      Counter  => 20000,
                                      Result   => Key);

To write a text, encrypt the content and save the file, we can chain several stream objects together. Because they are chained, the last stream object in the chain must be declared first and the first element of the chain will be declared last. The following declaration is used:

  Out_Stream   : aliased Util.Streams.Files.File_Stream;
  Cipher       : aliased Util.Streams.AES.Encoding_Stream;
  Printer      : Util.Streams.Texts.Print_Stream;

The stream objects are chained together by using their Initialize procedure. The Out_Stream is configured to write to the encrypted.aes file. The Cipher is configured to write to the Out_Stream with a 32 KB buffer. The Printer is configured to write to the Cipher with a 4 KB buffer.

  Out_Stream.Initialize (Mode => Ada.Streams.Stream_IO.Out_File,
                         Name => "encrypted.aes");
  Cipher.Initialize (Output => Out_Stream'Unchecked_Access,
                     Size   => 32768);
  Printer.Initialize (Output => Cipher'Unchecked_Access,
                      Size   => 4096);

The last step before using the cipher is to configure the encryption key and modes:

  Cipher.Set_Key (Secret => Key, Mode => Util.Encoders.AES.ECB);

It is now possible to write the text by using the Printer object:

Printer.Write ("Hello world!");

12 Encoders

The Util.Encoders package defines the Encoder and Decoder types which provide a mechanism to transform a stream from one format into another format. The basic encoder and decoder support base16, base64, base64url and sha1. The following code extract will encode in base64:

C : constant Encoder := Util.Encoders.Create ("base64");
S : constant String := C.Encode ("Ada is great!");

and the next code extract will decode the base64:

D : constant Decoder := Util.Encoders.Create ("base64");
S : constant String := D.Decode ("QWRhIGlzIGdyZWF0IQ==");

To use the packages described here, use the following GNAT project:

with "utilada_sys";

12.1 URI Encoder and Decoder

The Util.Encoders.URI package provides operations to encode and decode using the URI percent encoding and decoding scheme. A string encoded using percent encoding as described in RFC 3986 is simply decoded as follows:

Decoded : constant String := Util.Encoders.URI.Decode ("%20%2F%3A");

To encode a string, one must choose the character set that must be encoded and then call the Encode function. The character set indicates those characters that must be percent encoded. Two character sets are provided.

Encoded : constant String := Util.Encoders.URI.Encode (" /:");

12.2 Error Correction Code

The Util.Encoders.ECC package provides operations to support error correction codes. The error correction works on blocks of 256 or 512 bytes and can detect 2-bit errors and correct 1-bit errors. The ECC uses only three additional bytes. The ECC algorithm implemented by this package is the one used by several NAND Flash memories. It can be used to increase the robustness of data against bit tampering when the data is restored from an external storage (note that if the external storage has its own ECC correction, adding another software ECC correction will probably not help).

The ECC code is generated by using the Make procedure, which gets a block of 256 or 512 bytes and produces the 3-byte ECC code. The ECC code must be saved together with the data block.

Code : Util.Encoders.ECC.ECC_Code;
...
Util.Encoders.ECC.Make (Data, Code);

When reading the data block back, you can verify and correct it by running the Make procedure again on the data block and comparing the new ECC code with the ECC code produced by the first call. The Correct function is called with the data block, the expected ECC code that was saved with the data block, and the newly computed ECC code.

New_Code : Util.Encoders.ECC.ECC_Code;
...
Util.Encoders.ECC.Make (Data, New_Code);
case Util.Encoders.ECC.Correct (Data, Expect_Code, New_Code) is
   when NO_ERROR | CORRECTABLE_ERROR => ...
   when others => ...
end case;

13 Other utilities

13.1 Text Builders

The Util.Texts.Builders generic package was designed to provide string builders. The interface was designed to reduce memory copies as much as possible.

First, instantiate the package for the element type (e.g. String):

package String_Builder is new Util.Texts.Builders (Character, String);

Declare the string builder instance with its initial capacity:

Builder : String_Builder.Builder (256);

And append to it:

String_Builder.Append (Builder, "Hello");

To get the content collected in the builder instance, write a procedure that receives the chunk data as parameter:

procedure Collect (Item : in String) is ...

And use the Iterate operation:

String_Builder.Iterate (Builder, Collect'Access);

13.2 Listeners

The Listeners package implements a simple observer/listener design pattern. A subscriber registers itself in a list. When a change is made on an object, the application can notify the subscribers, which are then called with the object.

13.2.1 Creating the listener list

The listeners list contains a list of listener interfaces.

L : Util.Listeners.List;

The list is heterogeneous meaning that several kinds of listeners could be registered.

13.2.2 Creating the observers

First the Observers package must be instantiated with the type being observed. In the example below, we will observe a string:

package String_Observers is new Util.Listeners.Observers (String);

13.2.3 Implementing the observer

Now we must implement the string observer:

type String_Observer is new String_Observers.Observer with null record;
procedure Update (List : in String_Observer; Item : in String);

13.2.4 Registering the observer

An instance of the string observer must now be registered in the list.

O : aliased String_Observer;
L.Append (O'Access);

13.2.5 Publishing

Notifying the listeners is done by invoking the Notify operation provided by the String_Observers package:

String_Observers.Notify (L, "Hello");

13.3 Timer Management

The Util.Events.Timers package provides a timer list that allows operations to be called on a regular basis when a deadline has expired. It is very close to the Ada.Real_Time.Timing_Events package but provides more flexibility by allowing several timer lists that run independently. Unlike the GNAT implementation, this timer list management does not use tasks at all. The timer list can therefore be used in a mono-task environment by the main process task. Furthermore, you can control your own task priority by having your own task use the timer list.

The timer list is created by an instance of Timer_List:

Manager : Util.Events.Timers.Timer_List;

The timer list is protected against concurrent accesses so that timing events can be setup by a task but the timer handler is executed by another task.

13.3.1 Timer Creation

A timer handler is defined by implementing the Timer interface with the Time_Handler procedure. A typical timer handler could be declared as follows:

type Timeout is new Timer with null record;
overriding procedure Time_Handler (T : in out Timeout);
My_Timeout : aliased Timeout;

The timer instance is represented by the Timer_Ref type that describes the handler to be called as well as the deadline time. The timer instance is initialized as follows:

T : Util.Events.Timers.Timer_Ref;
Manager.Set_Timer (T, My_Timeout'Access, Ada.Real_Time.Seconds (1));

13.3.2 Timer Main Loop

Because the implementation does not impose any execution model, the timer management must be called regularly by some application main loop. The Process procedure executes the timer handlers whose deadline has expired and returns the deadline for the next timer to execute.

Deadline : Ada.Real_Time.Time;
loop
   ...
   Manager.Process (Deadline);
   delay until Deadline;
end loop;

13.4 Executors

The Util.Executors generic package defines a queue of work items that are executed by one or several tasks. The Work_Type formal describes the type of the work and the Execute procedure is called by a task to execute the work. After instantiation of the package, an instance of the Executor_Manager is created with the desired number of tasks. The tasks are then started by calling the Start procedure.

A work object is added to the executor's queue by using the Execute procedure. The work object is placed in a concurrent FIFO queue; one of the tasks managed by the executor manager will pick up the work object and run it.
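
The steps above can be sketched as follows. The generic formal names (Work_Type, Execute) and the Executor_Manager discriminant are assumptions based on the description; the work record is illustrative:

```ada
with Util.Executors;
...
   type Work_Type is record
      Value : Natural := 0;
   end record;

   --  Called by one of the executor tasks for each queued work item.
   procedure Do_Work (Work : in out Work_Type) is
   begin
      ...
   end Do_Work;

   --  Instantiate the executor queue for our work type.
   package Work_Executors is
      new Util.Executors (Work_Type => Work_Type,
                          Execute   => Do_Work);

   Manager : Work_Executors.Executor_Manager (Count => 2);
   ...
      Manager.Start;

      --  Queue two work items; one of the two tasks picks up each.
      Manager.Execute (Work_Type '(Value => 1));
      Manager.Execute (Work_Type '(Value => 2));
```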

14 Performance Measurements

Performance measurement is often done with profiling tools such as GNU gprof or others. Such profiling is however not always appropriate for production or release deliveries. The mechanism presented here is a lightweight performance measurement that can be used in production systems.

The Ada package Util.Measures defines the types and operations to make performance measurements. It is designed to be used for production and multi-threaded environments.

14.1 Create the measure set

Measures are collected in a Measure_Set. Each measure has a name, a counter and the sum of time spent over all occurrences of the measure. The measure set should be declared as some global variable. The implementation is thread safe, meaning that a measure set can be used by several threads at the same time. It can also be associated with per-thread data (or a task attribute).

To declare the measure set, use:

 with Util.Measures;
    ...
    Perf : Util.Measures.Measure_Set;

14.2 Measure the implementation

A measure is made by creating a variable of type Stamp. The declaration of this variable marks the beginning of the measure. The measure ends at the next call to the Report procedure.

 with Util.Measures;
 ...
   declare
      Start : Util.Measures.Stamp;
   begin
      ...
      Util.Measures.Report (Perf, Start, "Measure for a block");
   end;

When the Report procedure is called, the time that elapsed between the creation of the Start variable and the procedure call is computed. This time is then associated with the measure title and the associated counter is incremented. The precision of the measured time depends on the system. On GNU/Linux, it uses gettimeofday.

If the block code is executed several times, the measure set will report the number of times it was executed.

14.3 Reporting results

After measures are collected, the results can be saved in a file or in an output stream. When saving the measures, the measure set is cleared.

 Util.Measures.Write (Perf, "Title of measures",
                      Ada.Text_IO.Standard_Output);

14.4 Measure Overhead

The overhead introduced by the measurement is quite small, as it does not exceed 1.5 µs on a 2.6 GHz Core Quad.

14.5 What must be measured

Defining a lot of measurements for a production system is in general not very useful. Measurements should be relatively high-level. For example: