

      Ignite Realtime Blog: Dan is voted in the XSF's Council!

      news.movim.eu / PlanetJabber · Thursday, 21 December, 2023 - 12:01

    Our very own @danc was voted into the XMPP Standards Foundation Council not too long ago!

    The XMPP Standards Foundation is an independent, nonprofit standards development organisation whose primary mission is to define open protocols for presence, instant messaging, and real-time communication and collaboration on top of the IETF’s Extensible Messaging and Presence Protocol (XMPP). Most of the projects that we’re maintaining in the Ignite Realtime community have a strong dependency on XMPP.

    The XMPP Council, of which Dan is now a member, is the technical steering group that approves XMPP Extension Protocols. With that, he is now at the forefront of new developments within the XMPP community! Congrats to you, Dan!

    For other release announcements and news, follow us on Mastodon or X.



      ProcessOne: Instant Messaging: Protocols are “Commons”, Let’s Take Them Seriously

      news.movim.eu / PlanetJabber · Wednesday, 20 December, 2023 - 17:50 · 8 minutes

    TL;DR

    Thirty years after the advent of the first instant messaging services, we still haven’t reached the stage where instant messaging platforms can freely communicate with each other, as is the case with email. In 1999, the Jabber/XMPP protocol was created and standardized for this purpose by the Internet Engineering Task Force (IETF). Since then, proprietary messaging services have continuously leveraged the power of internet giants to dominate the market. Why do neither XMPP nor the more recent Matrix, which aimed to improve upon it, break through this barrier, when it’s clear that protocols must be open to enable exchange? Without this fundamental principle, the Internet itself wouldn’t exist.

    In the following article, I revisit how the French government recently promoted the instant messaging service Olvid and what this reveals about our approach to digital technology. It’s frustrating to see France promote a secure, yet proprietary messaging service that offers no progress in terms of interoperability, especially at a time when the European Union is striving to open up the sector by requiring all messaging services to be capable of intercommunication, through the Digital Markets Act.

    I conclude with reflections on our inability in Europe to collaborate on “commons,” our difficulty in building a foundation, an ecosystem that allows for healthy co-opetition, a blend of competition and collaboration, which is the only way to regain significance in the digital economy. Short-term political thinking forces our companies into an every-man-for-himself approach, preferring to dominate a small market rather than share a larger one.

    Today, perhaps, it’s time for a change?

    Cables

    Thirty years and counting since the emergence of the first instant messaging services, we still lack a universally accepted exchange protocol, as is the case with email. The Jabber protocol, later renamed XMPP (eXtensible Messaging and Presence Protocol) and made a standard, was born with the hope of breaking the proliferation of isolated silos like MSN, ICQ, Yahoo!, which did not communicate with each other. Today, other silos have emerged, but the problem persists: it is still impossible to exchange messages between accounts from different major messaging providers. Why? Let me tell you the story of a clumsy communication operation around a French messaging service, Olvid, which illustrates well the familiar patterns we often find ourselves stuck in.

    The French Government’s Endorsement of a Proprietary Messaging Service: A Closer Look

    I discovered the messaging service Olvid in late November 2023, following a flood of articles in the French press. I wondered how a company of 15 employees, created in 2019, had managed to get such press coverage. It was promoted directly by Prime Minister Elisabeth Borne: “Popular messaging applications like WhatsApp, Telegram or Signal have ‘security flaws’,” justified the office of Elisabeth Borne, who urged her ministers to download the French application (Les Échos, November 30, 2023). In November 2023, Matignon asked government members and ministerial offices to install this system on their phones and computers “to replace other instant messaging services to enhance the security of exchanges.” Then came the superlatives: “The most secure messaging service in the world” (Jean-Noël Barrot). “A step towards greater French sovereignty” (Elisabeth Borne). And it needs to be done quickly: Elisabeth Borne asked ministers to “take all necessary steps” to deploy Olvid in their ministry “by December 8, 2023, at the latest” (Ouest France, November 29, 2023).

    Why Olvid? The articles I read on the subject remain relatively vague; I mainly know that it is certified by ANSSI, the organization guaranteeing the state’s IT security. Yet Olvid is far from the first secure messaging service I’ve come across, and this is the first time I’ve heard of it. What about other services, and especially Signal, which is recognized worldwide for its security, backed by audits? Among secure messengers, the list is long: Signal, Threema, Wire, Berty, etc. So, what security flaws are we talking about?

    Signal Hits Back: A Strong Response to Security Claims

    Signal’s response was swift, with a direct and clear position from Meredith Whittaker, president of the Signal Foundation:

    The French PM is mandating ministers use a small French messaging app. OK. But I’m alarmed that she’s claiming “security flaws” in Signal (et al) to justify the move. This claim is not backed by any evidence, and is dangerously misleading esp. coming from gov.
    If you want to use a French product go for it! But don’t spread misinfo in the process. Signal is independently audited, open source, and our protocol has been tested for >10yrs. We are serious about responsible disclosure and we prioritize all reports to security@signal.org
    Numérama, December 1, 2023

    Double Ratchet

    Regarding Olvid’s security, the main argument seems to be as follows: the system does not rely on centralized directories and operates without identifiers, which means no user account is hosted in the cloud.

    First, it seems to me that this is simply the principle of key-based authentication. Message routing is done solely based on a key, in the cryptographic sense. If it is lost, it’s impossible to recover the account. Nothing revolutionary, then; it’s cryptography dating back to the encryption software PGP (Pretty Good Privacy) of the 1990s, and even earlier.
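    The principle can be sketched in a few lines: if the identity is nothing more than a fingerprint of a public key, no central directory is needed, and losing the key means losing the account. This is an illustrative sketch of the general idea only, not Olvid’s actual scheme; all names are hypothetical.

    ```python
    import hashlib

    def key_fingerprint(public_key_bytes: bytes) -> str:
        """Derive a routable identifier from a public key, PGP-style:
        the identity *is* the key, so no central directory is needed."""
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        # Group the first 16 hex digits for readability, like a PGP fingerprint.
        return ":".join(digest[i:i + 4] for i in range(0, 16, 4))

    # Losing the key means losing the identity: nothing else maps to the account.
    addr = key_fingerprint(b"alice-public-key-material")
    ```

    The flip side, as noted above, is that there is no recovery path: the fingerprint cannot be reversed into a key, and no server holds a copy.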

    Then, such a system generally requires the physical exchange of public keys. Where Olvid seems to stand out is in the alternative methods it proposes to simplify and lighten the burden of exchanging keys in person. This can work, first because the product is not free, so the user base is limited, whereas Signal, for example, offers a global platform and says it needs an identifier (the phone number) to limit spam. Then, these alternative methods rely on mobile device management (MDM) tools, interfacing with an enterprise version of the Olvid server. One way or another, this goes through a central point of distribution and reintroduces a weakness. It is far from a completely decentralized protocol like the one the team building the Berty messaging service is attempting, for instance.

    Browsing their site to find the protocol, I admit I choked a bit on some claims thrown around rather freely, for example Post-Quantum Cryptography, cryptography that resists quantum computing. It’s nice, it’s pleasant, but in practice, what’s the reality? I didn’t find any more detail under this mention, and personally, being hit with such buzzwords makes me rather flee, as it smells of a salesperson who got a bit carried away. But let’s assume the Olvid team is composed of encryption experts. I skimmed their specifications, but I admit I’m not a mathematician, so who am I to judge their math formulas?

    What I do understand, however, is that almost all secure messaging systems, including Olvid, rely on the Double Ratchet algorithm, which was first introduced by… Signal.
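    For illustration, the symmetric half of the Double Ratchet, the KDF chain, can be sketched as follows. The HMAC construction with 0x01/0x02 domain separators follows the published Double Ratchet specification, but this is a toy sketch for intuition, not an audited implementation.

    ```python
    import hashlib
    import hmac

    def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
        """One step of the symmetric-key ratchet: derive a per-message key,
        then advance the chain key. Because the chain only moves forward
        through a one-way function, old message keys cannot be recomputed."""
        message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return message_key, next_chain_key

    # Ratchet forward twice: each message gets a fresh key.
    ck = hashlib.sha256(b"shared-secret").digest()
    mk1, ck = kdf_chain_step(ck)
    mk2, ck = kdf_chain_step(ck)
    ```

    The full Double Ratchet combines this chain with a Diffie-Hellman ratchet that regularly mixes in fresh key material, which is what gives Signal-style protocols their forward secrecy and post-compromise security.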

    At the Heart of Messaging: The Critical Role of Protocols

    In terms of protocol, however, I am an expert. I have been working on instant messaging protocols since 1999. And it’s not pretty… Olvid’s protocol is the antithesis of what I would like to see in an ambitious messaging protocol. It is a proprietary, ad hoc protocol, not based on any standard, minimalist for now, and it condemns itself to reinventing the wheel, poorly. The burning question is: why not choose an open protocol that already works at large scale, like XMPP, and add their value on top? The Internet protocol, TCP/IP, is open, and all machines in the world can communicate, yet there are competing internet service providers. I am still looking for an answer.

    Because XMPP is too complex, some will say? I think any sufficiently advanced chat protocol tends to become a less accomplished derivative of XMPP. Come on, then, why not at least use Matrix, a competing protocol to my favorite? Apart from simple ignorance, I see no reason. Unless it’s to lock down the platform, perhaps? But locking down a communication protocol makes no sense. It’s replaying the battle of the internet protocols, TCP/IP versus X.25. A communication protocol is meant to be open and interoperable. Personally, I would invite Olvid to adopt a messaging standard. Let them turn to the W3C or the IETF, to XMPP or MLS. These organizations do good work, and it’s a guarantee of sustainability and, above all, of interoperability.

    We come to a very sore point. The European Commission, and therefore France as well, is discussing the implementation of the Digital Markets Act. Among the points the European Union wants to impose is… the interoperability of instant messaging services. How can the French government promote a messaging solution that is not interoperable, let alone standardized and open?

    I talked about Olvid’s proprietary protocol, which is actually more of an API (Application Programming Interface), that is, a document that describes how to automate certain functions of their server. What about the implementation? The client is open source (on iOS and Android), but browsing its exchange interface, I saw calls to URLs named /Freetrial, which implies payment. I am not sure that Olvid would welcome the idea of users compiling and deploying their own version of the client. That is the principle of Open Source, but such an initiative could try to circumvent payments to Olvid. Since, in any case, no open-source server is available and the only one running is operated by Olvid, the client code is of little use. Especially since the client code is published by Olvid, but to what extent can we know whether it is 100% identical to the version distributed in the iOS and Android app stores? We don’t really have a way of knowing.

    I know that Olvid promises one day to release the server as Open Source. What I’ve seen of the protocol, their business model, and what they say about their implementation, very tied to the Amazon infrastructure (an infrastructure managed by an American company, so much for sovereignty), makes me think that this will not happen, at least not for a very long time. I hope, of course, to be wrong.

    Toward Openness and Collaboration in Digital Communication

    In the meantime? I would really like us to be serious about instant messaging: that all players in the sector finally row in the same direction, those who work on open protocols, offering free servers and clients; that we build real collaboration, worthy of the construction of the internet protocols, to lay the foundation of a universal, open, open-source and truly interoperable messaging service. It doesn’t take much to develop a culture of “coopetition”: collaboration around a common good between competing companies.


    Found a mistake? I’m not perfect and would be happy to correct it. Contact us!

    — Photo by Steve Johnson on Unsplash

    The post Instant Messaging: Protocols are “Commons”, Let’s Take Them Seriously first appeared on ProcessOne .
      This post is public: www.process-one.net/blog/instant-messaging-protocols-are-commons-lets-take-them-seriously/


      Isode: Red/Black – 2.1 New Capabilities

      news.movim.eu / PlanetJabber · Wednesday, 13 December, 2023 - 15:13 · 3 minutes

    Overview

    This release adds important new functionality and further device drivers to Red/Black, a management tool that allows you to monitor and control devices and servers across a network, with a particular focus on HF Radio Systems. A general summary is given in the white paper Red/Black Overview.

    Rules

    Red/Black 2.1 adds a Rules capability that allows rules to be specified in the Lua programming language, which enables flexible control. Standard rules are provided, along with sample rules to help create rules useful for a deployment. There are a number of rule capabilities:

    • A basic rule capability is control based on device parameter values. Rules can generate alerts, for example to alert an operator at a selected severity when a message queue exceeds a certain size.
    • For devices with parameters that clearly show fault or exception status, standard device-type rules are provided that will alert the operator to the fault condition. This standard rule can be selected for devices of that type.
    • Rules can set parameters on devices, including control of device actions. For example, this can be used to turn off a device when a thermometer device records a high temperature.
    • Rules can reference devices connected in the communications chain. For example, a rule can be created to alert an operator if the frequency used on a radio does not match the supported frequency range of a connected antenna.
    • Rules can be used to reconfigure (soft) connectivity, for example to switch in a replacement device when a device fails.
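    As a rough illustration of the first rule type above, a parameter-threshold rule might look like the sketch below. It is written in Python purely for brevity (actual Red/Black rules are written in Lua), and the device structure, `alert` callback and parameter names are all hypothetical.

    ```python
    def queue_size_rule(device: dict, alert) -> None:
        """Hypothetical threshold rule: alert an operator when a
        device's message queue exceeds a fixed size."""
        THRESHOLD = 1000
        size = device["parameters"]["queue_size"]
        if size > THRESHOLD:
            alert(severity="warning",
                  message=f"Queue size {size} exceeds {THRESHOLD}")

    # Collect alerts instead of sending them to an operator console.
    alerts = []
    queue_size_rule({"parameters": {"queue_size": 1500}},
                    lambda **a: alerts.append(a))
    ```

    The other rule types follow the same pattern, but read parameters from several devices (for example a radio and its connected antenna) or write parameters back to a device instead of raising an alert.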

    Snapshot

    Configuration snapshots can be taken, reflecting the current Red/Black configuration, and the Red/Black configuration can be reset to a snapshot. This capability is intended to record the standard operational status of a setup, allowing convenient reversion after temporary changes.

    eLogic Radio Gateway driver

    The eLogic Radio Gateway provides conversion between synchronous serial and TCP, with multiple convertors in a single SNMP-managed box. A key target for this is data connectivity to remote Tx/Rx sites. The Red/Black driver enables configuration of TCP-to-Serial and Serial-to-TCP modes, enabling a Red/Black operator to change selected modem/radios.

    Web (http) Drivers

    Red/Black 2.1 has added an internal Isode framework for managing devices with an HTTP interface, which is being used in a number of new drivers. This is Isode’s preferred approach for managing devices. The new drivers are:

    1. M-Link. Allows monitoring of M-Link servers, showing:
      1. Number of connected users.
      2. Number of peer connections.
      3. Number of queued stanzas.
    2. Icon-5066. Controls the STANAG 5066 product:
      1. Enable/disable node.
      2. Show STANAG 5066 address.
      3. Show number of connected SIS clients.
      4. Show whether flow is on or off.
    3. Icon-PEP. Providing:
      1. Enable/disable service.
      2. Show number of TCP connections.
      3. Show current transfer rate.
    4. Sodium Sync. Providing:
      1. Number of synchronizations.
      2. Last synchronization that made changes.
      3. List of synchronizations not working correctly.
      4. Alerts for failed synchronizations.
    5. Supported Modems. This replaces drivers working directly with modems included in Icon-5066 3.0. The new driver talks directly to Proxy Modem, or to Icon-5066 where Proxy Modem is not used. It displays a wide range of modem parameters. Various modem types can be selected to display appropriate information from the connected device:
      1. Narrowband modem.
      2. Narrowband modem with ALE.
      3. Wideband modem.
      4. Modem/radio combined variants of the previous three types.

    Other

    • Parameter Encryption. Red/Black can securely store parameters, such as passwords, to prevent their exposure as command-line arguments to device drivers.
    • Device Ordering. Devices are now listed in alphabetical order.
    • Alert Source. Alerts now clearly show where they were generated (Red/Black; Rule; Device Driver; Device).
    • Link to device management. Where Red/Black-monitored devices have Web management, the URL of the Web interface can be configured in Red/Black so that the management UI can be accessed with a single click.
      This post is public: www.isode.com/company/wordpress/red-black-2-1-new-capabilities/


      Erlang Solutions: MongooseIM 6.2: Easy to set up, use and manage

      news.movim.eu / PlanetJabber · Wednesday, 13 December, 2023 - 11:14 · 10 minutes

    MongooseIM, which is our scalable, flexible and cost-efficient instant messaging server, is now easier to use than ever before. The latest release 6.2 introduces a completely new CETS in-memory storage backend, letting you easily deploy it with modern cloud infrastructure solutions such as Kubernetes. The XMPP extensions are also updated, which means that we support new features of the XMPP protocol.

    The new version of MongooseIM is very easy to try out, as there are two new options:

    • Firstly, you can check out trymongoose.im – a live demo installation of the latest version, which lets you create your own XMPP domain and experiment with it. It also showcases how a Phoenix web application can be integrated with MongooseIM using its GraphQL API.
    • If you want to set up your own MongooseIM installation, you can now easily set it up in Kubernetes with Helm. Our new Helm chart automatically templates the configuration files, making it possible to quickly set up a running cluster of several nodes connected to a database.

    One of the biggest new features is the support for CETS, which makes management of MongooseIM much easier than before. To fully appreciate this improvement, we need to start with an overview of the clustered storage options in MongooseIM. We will follow with a brief guide, helping you quickly set up a running server with the latest features enabled.

    From Mnesia to CETS

    MongooseIM is implemented in Erlang, making it possible to handle millions of connected clients exchanging messages. However, a typical user should not need any Erlang knowledge to deploy and maintain a messaging server. Up to version 6.1, there is one component that breaks this assumption, making management and maintenance much harder. This component is the built-in Erlang database, Mnesia, which is convenient when you are starting your journey with MongooseIM because it resides on the local disk and does not need to be started as a separate service. All MongooseIM nodes are clustered together, and they replicate Mnesia tables between them.

    Issues with Mnesia

    When you go beyond small experiments on your local machine, it is essential not to store any persistent data in Mnesia, because it is not designed for storing large volumes of data. Also, network connectivity issues or incorrect restarts might make your database inconsistent, leading to unexpected errors and cluster nodes refusing to start. It is also difficult to migrate your data to another database. That is why it is strongly recommended to use a relational database management system (RDBMS) such as PostgreSQL or MySQL, which you can host yourself or use cloud-based solutions such as Amazon RDS. However, when you configure MongooseIM 6.1 and its extension modules to use RDBMS, you will find out that the server still needs Mnesia for its operation. This is because Mnesia is also used to store in-memory data shared between the cluster nodes. For example, by sharing user sessions MongooseIM can route messages between users connected to different nodes of the cluster.

    When Mnesia was first created, a server node used to be a long-running physical unit that is very rarely restarted – actually one of the main advantages of Erlang was the ability to significantly reduce downtime. With the introduction of virtualisation and containers, a server node is no longer tied to the underlying hardware, and new nodes can be dynamically added or removed. This means that the cluster is much more dynamic, and nodes can be started more often. This brings us to another issue of Mnesia – the need for storing the database schema on disk, which contains the information about all nodes in the cluster and their tables. This is mostly a problem with platforms like Kubernetes, where adding disk storage requires use of persistent volumes, which are costly and need to be manually deleted when a node is removed from the cluster. As a result, the whole management process becomes more error-prone.

    Another problem is the additional cluster management required for each node. When a new node starts up, it is not a member of any cluster; there is a join_cluster command that needs to be executed. The same happens with node removal, when leave_cluster needs to be called. For the convenience of the user, our Helm charts automatically call these commands for the started nodes, but they still need to be started in a particular order, which has to be respected when doing restarts and upgrades as well. If for some reason you change that order, the nodes might be locked until all of them are online (see the documentation), which is inconvenient, might result in overload, and can even cause the whole cluster to be down if the final node does not start up for some reason. Finally, network connectivity issues might result in an inconsistent database or other errors (even without persistent tables), which can be difficult to understand for anyone but Erlang developers and may require manual intervention on the affected nodes. The solution is usually to stop the affected node, clean up the Mnesia volume, and start it again, which adds unwanted downtime for the server and workload for the operator.

    It is important to note that we have these issues not because Mnesia is inherently bad, but because our use case has drifted away from its intended purpose, i.e. we need no persistence and transactions, but we would benefit from automatic features like simple conflict resolution and dynamic cluster discovery. This situation led us to develop a new library, which precisely meets our requirements.

    Introducing Cluster ETS

    CETS is a lightweight replication layer for ETS (Erlang Term Storage) tables. The main principle of this library is to replicate ETS data to other nodes of the cluster with simple and automatic conflict resolution. In most cases, conflicts are not even possible, because the key of each stored key-value tuple uniquely identifies the creating node. In MongooseIM, we are using the RDBMS cluster node discovery mechanism. This means that each cluster node updates the database periodically, storing its name and IP address in the discovery_nodes table. Other nodes check this table periodically to determine the cluster nodes, and connect to them. Nodes that are down for a long time (by default 1 hour) are removed from the table to avoid trying to connect to them. The database used for CETS is the same one that is used to store other persistent data, so in a typical case there should be no extra databases required.
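    The discovery mechanism described above can be sketched with a plain SQL table. The sketch below is illustrative only: the text mentions a discovery_nodes table, but the exact columns are an assumption here, and SQLite stands in for the real RDBMS.

    ```python
    import sqlite3
    import time

    # In-memory database standing in for the shared RDBMS.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE discovery_nodes
                  (node_name TEXT PRIMARY KEY, address TEXT, updated_at REAL)""")

    def heartbeat(node_name: str, address: str) -> None:
        """Each node periodically upserts its own row, refreshing its timestamp."""
        db.execute("""INSERT INTO discovery_nodes VALUES (?, ?, ?)
                      ON CONFLICT(node_name) DO UPDATE
                      SET address = excluded.address,
                          updated_at = excluded.updated_at""",
                   (node_name, address, time.time()))

    def discover(max_age_seconds: float = 3600.0) -> list[str]:
        """Other nodes read the table, skipping rows stale for longer than
        the cutoff (one hour by default, matching the text above)."""
        cutoff = time.time() - max_age_seconds
        rows = db.execute(
            "SELECT node_name FROM discovery_nodes WHERE updated_at > ?",
            (cutoff,)).fetchall()
        return [r[0] for r in rows]

    heartbeat("mongooseim@node-0", "10.0.0.1")
    heartbeat("mongooseim@node-1", "10.0.0.2")
    ```

    Because every node only ever writes its own row, there is nothing to coordinate: a fresh node simply starts heartbeating and is discovered by the others on their next poll, with no join_cluster step.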

    The first benefit visible to the user is that the nodes don’t need to be added to the cluster anymore. You don’t need commands like join_cluster or leave_cluster – actually you cannot use them anymore. Another immediate benefit is the lack of persistent volumes required by MongooseIM, which means that any node can be immediately replaced by another fresh instance. It is also no longer possible to have consistency errors, because there is no persistent schema and any (unlikely) conflicts are resolved automatically.

    Using CETS

    Let’s see how quickly the new MongooseIM with CETS can be set up. This simple example assumes that you have Docker and Kubernetes installed locally. These tools simplify the setup process a lot, but if you cannot use them, you can manually configure MongooseIM to use CETS as well – see the tutorial. In this example we will use PostgreSQL for all persistent storage in MongooseIM, including CETS node discovery. You only need to download the database schema file pg.sql to your current directory and execute the following command:

    $ docker run -d --name mongooseim-postgres -e POSTGRES_PASSWORD=mongooseim_secret \
        -e POSTGRES_USER=mongooseim -v `pwd`/pg.sql:/docker-entrypoint-initdb.d/pgsql.sql:ro \
        -p 5432:5432 postgres

    The database should be up and running – let’s check it with psql:

    $ PGPASSWORD=mongooseim_secret psql -U mongooseim -h localhost
    (...)
    mongooseim=#

    Next, let’s install MongooseIM in Kubernetes with Helm. The volatileDatabase and persistentDatabase options are used to populate the generated MongooseIM configuration file with the required database options. Since we have set the DB to use the default MongooseIM credentials, we don’t need to provide them here. If you want to use a different user name, password or other parameters, see the chart documentation for a complete list of options.

    $ helm repo add mongoose https://esl.github.io/MongooseHelm/
    $ helm install mim mongoose/mongooseim --set replicaCount=3 --set volatileDatabase=cets \
        --set persistentDatabase=rdbms
    NAME: mim
    LAST DEPLOYED: Tue Nov 28 08:56:16 2023
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Thank you for installing MongooseIM 6.2.0
    (...)
    

    Your three-node cluster using CETS and RDBMS should start up quickly. You can monitor its progress with Kubernetes:

    $ watch kubectl get sts,pod,svc
    
    NAME                          READY   AGE
    statefulset.apps/mongooseim   3/3     2m
    
    NAME               READY   STATUS    RESTARTS   AGE
    pod/mongooseim-0   1/1     Running   0          2m
    pod/mongooseim-1   1/1     Running   0          2m
    pod/mongooseim-2   1/1     Running   0          1m
    
    NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
    service/kubernetes      ClusterIP      10.96.0.1        <none>        443/TCP                    91d
    service/mongooseim      ClusterIP      None             <none>        4369/TCP,5222/TCP, (...)   2m
    service/mongooseim-lb   LoadBalancer   10.102.205.139   localhost     5222:32178/TCP, (...)      2m 

    When the XMPP port 5222 is open on localhost by the load balancer, the whole service is ready to use. You can check CETS cluster status on each node with the CLI (or the GraphQL API ). The following command checks the status on mongooseim-0 (the first node in the cluster):

    $ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl cets systemInfo
    {
      "data" : {
        "cets" : {
          "systemInfo" : {
            "unavailableNodes" : [],
            "remoteUnknownTables" : [],
            "remoteNodesWithoutDisco" : [],
            "remoteNodesWithUnknownTables" : [],
            "remoteNodesWithMissingTables" : [],
            "remoteMissingTables" : [],
            "joinedNodes" : [
              "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
            ],
            "discoveryWorks" : true,
            "discoveredNodes" : [
              "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
            ],
            "conflictTables" : [],
            "conflictNodes" : [],
            "availableNodes" : [
              "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
            ]
          }
        }
      }
    }

    You should see all nodes listed in joinedNodes, discoveredNodes and availableNodes. The other lists should be empty. There is a tableInfo command as well, which shows information about each table:

    $ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl cets tableInfo
    {
      "data" : {
        "cets" : {
          "tableInfo" : [
            {
              "tableName" : "cets_bosh",
              "size" : 0,
              "nodes" : [
                "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
              ],
              "memory" : 141
            },
            {
              "tableName" : "cets_cluster_id",
              "size" : 1,
              "nodes" : [
                "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
              ],
              "memory" : 156
            },
            {
              "tableName" : "cets_external_component",
              "size" : 0,
              "nodes" : [
                "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
              ],
              "memory" : 307
            },
            (...)
          ]
        }
      }
    }


    You can find more information about these commands in our GraphQL docs, because the CLI is actually using the GraphQL commands. To complete our example, let’s create our first XMPP user account:

    $ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl account registerUser \
      --username alice --domain localhost --password secret
    {
      "data" : {
        "account" : {
          "registerUser" : {
            "message" : "User alice@localhost successfully registered",
            "jid" : "alice@localhost"
          }
        }
      }
    }


    Now you can connect to the server with an XMPP client as alice@localhost – see https://trymongoose.im/client-apps or https://xmpp.org/software/?platform=all-platforms for client software.

    New extensions

    MongooseIM 6.2 satisfies the XMPP Compliance Suites 2023, as reported at xmpp.org. Thanks to the new extensible architecture of mongoose_c2s, we are implementing new extensions faster than before. For example, we have recently added support for XEP-0386: Bind 2 and XEP-0388: Extensible SASL Profile, allowing the client to authenticate, bind the resource and enable extensions like message carbons, stream management and client state indication. All of this can be done in a single step, without the need for redundant roundtrips (see the example). This way your clients can establish their sessions faster than before, putting less load on the client and the server. We have also updated multiple extensions to their latest versions, and we will continue the effort to keep them up to date, adding new ones as well. Do you think we should support a new XMPP extension? Feel free to request a feature, so we can put it on our roadmap, and if you really want it now, we can discuss possible sponsoring options.
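    Schematically, the single-step session establishment looks like one client request carrying both the SASL exchange and the resource bind. The fragment below is only a sketch, with the base64 payload elided; XEP-0388 and XEP-0386 remain the authoritative references for the exact elements and namespaces.

    ```xml
    <!-- Sketch: authenticate and bind in one roundtrip (payload elided). -->
    <authenticate xmlns='urn:xmpp:sasl:2' mechanism='SCRAM-SHA-256'>
      <initial-response>…</initial-response>
      <bind xmlns='urn:xmpp:bind:0'/>
    </authenticate>
    ```

    With the pre-6.2 flow, authentication, stream restart, resource binding and per-extension enabling each cost a roundtrip; folding them into one request is what reduces the load on slow mobile links.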

    Summary

    With the latest release 6.2 we have brought MongooseIM closer to you. Now you can try it out online as well as easily install it in Kubernetes without caring about persistent state and volumes. Your next step is to try our live demo, install MongooseIM with Helm and experiment with it. You can do it all for free and without Erlang knowledge, so go ahead and use it as the foundation of your new messaging solution. You are also not left alone – should you have any questions, please feel free to contact us , and we will be happy to deploy, load-test, health-check, optimise and customise MongooseIM to fit your needs.

    The post MongooseIM 6.2: Easy to set up, use and manage appeared first on Erlang Solutions .

    • chevron_right

      JMP: Newsletter: Holidays

      news.movim.eu / PlanetJabber · Wednesday, 13 December, 2023 - 00:25 · 2 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client. Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    Automatic refill for users of the data plan was rolled out to everyone this fall. This has been going well and we fully expect to enable new SIM and eSIM orders for all JMP customers (with no waitlist) in January, after the holidays.

    Speaking of holidays, MBOA staff, including JMP support staff, will be taking an end of year break just like we always do. Expect support response times to be longer than usual from December 18 until January 2.

    This fall also saw the silent launch of new inventory features for JMP. Historically, JMP has never held inventory of phone numbers, buying them directly from our carrier partners when a customer places an order. Unfortunately, this leaves us at the mercy of which regions our partners choose to keep in stock, and this year saw several occasions where there was no stock at all for all of Canada. So we now have a limited amount of local inventory to improve coverage of important regions, and may eventually be adding a function for “premium numbers” for very rare area codes or similar which cost more to stock.

    We have also been working in partnership with Snikket on a cross-platform SDK which we hope will make it easier for developers to build applications that integrate with the Jabber network without needing to be protocol or standards experts. Watch the chatroom and the Snikket blog for more information and demos.

    There have also been several releases of the Cheogram Android app ( latest is 2.13.0-1 ) with new features including:

    • Improved call connection stability
    • Verify DNSSEC and DANE and show status in UI
    • Show command UI on channels when there are commands to show
    • Show thread selector when starting a mention
    • Circle around thread selector
    • Several Android 14 specific fixes, including for dialler integration
    • Opening WebXDC from home screen even from a very old message

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!

    • wifi_tethering open_in_new

      This post is public

      blog.jmp.chat /b/december-newsletter-2023

    • chevron_right

      Ignite Realtime Blog: Smack 4.5.0-alpha2 released

      news.movim.eu / PlanetJabber · Saturday, 9 December, 2023 - 17:46

    We are happy to announce the release of the second alpha release of Smack’s upcoming 4.5 version.

This version fixes a nasty bug in Smack’s reactor, includes support for XMPP over WebSocket connections and much more. Even though Smack has good test coverage, thanks to its comprehensive unit test suite and integration test framework, we kindly ask you to test pre-releases and report feedback.

    As always, this Smack release is available via Maven Central .

    1 post - 1 participant

    Read full topic

    • chevron_right

      Erlang Solutions: Reimplementing Technical Debt with State Machines

      news.movim.eu / PlanetJabber · Wednesday, 6 December, 2023 - 10:29 · 16 minutes

    In the ever-evolving landscape of software development, mastering the art of managing complexity is a skill every developer and manager alike aspires to attain. One powerful tool that often remains in the shadows, yet holds the key to simplifying intricate systems, is the humble state machine. Let’s get started.

    Models

    State machines can be seen as models that represent system behaviour. Much like a flowchart on steroids, these models represent an easy way to visualise complex computation flows through a system.

    A typical case study for state machines is the implementation of internet protocols. Be it TLS, SSH, HTTP or XMPP, these protocols define an abstract machine that reacts to client input by transforming its own state, or, if the input is invalid, dying.

    A case study

    Let’s consider the case of a simplified version of the XMPP protocol. This messaging protocol is implemented on top of a TCP stream, and it uses XML elements as its payload format. The protocol, on the server side, goes as follows:

    1. The machine is in the “waiting for a stream-start” state; it hasn’t received any input yet.
    2. When the client sends such a stream-start, a payload looking like the following:

    <stream:stream to='localhost' version='1.0' xml:lang='en' xmlns='jabber:client' xmlns:stream='http://etherx.jabber.org/streams'>

    Then the machine forwards certain payloads to the client (a stream-start and a stream-features; their details are omitted in this document for simplicity) and transitions to “waiting for features before authentication”.

    3. When the client sends an authentication request, a payload looking like the following:

    <auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl' mechanism='PLAIN'>AGFsaWNFAG1hdHlncnlzYQ==</auth>

    Then the machine, if no request-response mechanism is required for authentication, answers the client and transitions to a new “waiting for stream-start” state, but this time “after authentication”.

    4. When the client again starts the stream, this time authenticated, with a payload like the following:

    <stream:stream to='localhost' version='1.0' xml:lang='en' xmlns='jabber:client' xmlns:stream='http://etherx.jabber.org/streams'>

    Then the machine again answers the respective payloads, and transitions to a new “waiting for features after authentication”.

    5. And finally, when the client sends:

    <iq type='set' id='1c037e23fab169b92edb4b123fba1da6'>
      <bind xmlns='urn:ietf:params:xml:ns:xmpp-bind'>
        <resource>res1</resource>
      </bind>
    </iq>

    Then it transitions to “session established”.

    6. From this point, other machines can find it and send it new payloads, called “stanzas”, which are XML elements whose names are one of “message”, “iq”, or “presence”. We will omit the details of these for the sake of simplicity again.

    Because often one picture is worth a thousand words, see the diagram below:

    [Diagram: states and transitions of the simplified XMPP protocol described above]

    Implementing the case

    Textbook examples of state machines, and indeed the old OTP implementation of such behaviour, gen_fsm, always give state machines whose states can be defined by a single name, not taking into account that such a name can be “the name of” a data structure instead. In Erlang in particular, gen_fsm imposed the name of the state to be an atom, just so that it can be mapped to a function name and be callable. But this is an unfortunate oversight of complexity management, where the state of a machine depends on a set of variables that, if not in the name, need to be stored elsewhere, usually the machine’s data, breaking the abstraction.

    Observe in the example above, the case for waiting for stream-start and features: they both exist within unauthenticated and authenticated realms. A naive implementation, where the function name is the state, the first parameter the client’s input, and the second parameter the machine’s data, would say that:

    wait_for_stream(stream_start, #data{auth = false} = Data) ->
    	{wait_for_feature, Data};
    wait_for_stream(stream_start, #data{auth = true} = Data) ->
    	{wait_for_feature, Data}.
    
    wait_for_feature(authenticate, #data{auth = false} = Data) ->
    	{wait_for_stream, Data#data{auth = true}};
    wait_for_feature(session, #data{auth = true} = Data) ->
    	{session, Data}.

    In each case, we will take different actions, like building different answers for the client, so we cannot coalesce seemingly similar states into less functions.

    But what if we want to implement retries on authentication?

    We need to add a new field to the data record, as follows:

    wait_for_stream(stream_start, #data{auth = false} = Data) ->
    	{wait_for_feature, Data#data{retry = 3}}.
    
    wait_for_feature(authenticate, #data{auth = false} = Data) ->
    	{wait_for_stream, Data#data{auth = true}};
    wait_for_feature(_, #data{auth = false, retry = 0}) ->
    	stop;
    wait_for_feature(_, #data{auth = false, retry = N} = Data) ->
    	{wait_for_feature, Data#data{retry = N - 1}}.

    The problem here is twofold:

    1. When the machine is authenticated, this field is not valid anymore, yet it will be kept in the data record for the whole life of the machine, wasting memory and garbage collection time.
    2. It breaks the finite state machine abstraction –too early–, as it uses an unbounded memory field with random access lookups to decide how to compute the next transition, effectively behaving like a full Turing Machine — note that this power is one we will need nevertheless, but we will introduce it for a completely different purpose.

    This can get unwieldy when we introduce more features that depend on specific states. For example, when authentication requires roundtrips and the final result depends on all the accumulated input of such roundtrips, we would also accumulate them on the data record, and pattern-match which input is next, or introduce more function names.

    Or if authentication requires a request-response roundtrip to a separate machine, if we want to make such requests asynchronous because we want to process more authentication input while the server processes the first payload, we would also need to handle more states and remember the accumulated one. Again, storing these requests on the data record keeps more data permanent that is relevant only to this state, and uses more memory outside of the state definition. Fixing this antipattern lets us reduce the data record from having 62 fields to being composed of only 10.

    Before we go any further, let’s talk a bit about automatas.

    Automata theory

    In computer science, and more particularly in automata theory, we have at our disposal a set of theoretical constructs that allow us to model certain problems of computation and, even more ambitiously, define what a computer can do altogether. Namely, there are three automata, ordered by computing power: finite state machines, pushdown automata, and Turing machines. Each defines a very specific algorithm schema, along with a state of “termination”. With the given algorithm schema and their definition of termination, they are distinguished by the input they are able to accept while terminating.

    Conceptually, a Turing Machine is a machine capable of computing everything we know computers can do: really, Turing Machines and our modern computers are theoretically one and the same thing, modulo equivalence.

    Let’s get more mathematical. Let’s give some definitions:

    1. Alphabet: a set denoted as Σ of input symbols, for example Σ = {0,1}, or Σ = [ASCII]
    2. A string over Σ: a concatenation of symbols of the alphabet Σ
    3. The Power of an Alphabet: Σ*, the set of all possible strings over Σ, including the empty set (an empty string).

    An automaton is said to recognise a string over Σ if it “terminates” when consuming the string as input. In this view, automata generate formal languages, that is, specific subsets of Σ* with certain properties. Let’s see the typical automata:

    1. A Finite State Machine is a finite set of states Q (hence the name of the concept), an alphabet Σ and a function 𝛿 of a state and an input symbol that outputs a new state (and can have side-effects)
    2. A Pushdown Automaton is a finite set of states Q, an alphabet Σ, a stack Γ of symbols of Σ, and a function 𝛿 of a state, an input symbol, and the stack, that outputs a new state, and modifies the stack by either popping the last symbol, pushing a new symbol, or both (effectively swapping the last symbol).
    3. A Turing Machine is a finite set of states Q, an alphabet Σ, an infinite tape Γ of cells containing symbols of Σ, and a function 𝛿 of a state, an input symbol, and the current tape cell, that outputs a new state, a new symbol to write in the current cell (which might be the same as before), and a direction, either left or right, to move the head of the tape.

    Conceptually, Finite State Machines can “keep track of” one thing, while Pushdown Automata can “keep track of” up to two things. For example, there is a state machine that can recognise all strings that have an even number of zeroes, but there is no state machine that can recognise all strings that have an equal number of ones and zeroes. However, this can be done by a pushdown automaton. But neither state machines nor pushdown automata can generate the language of all strings that have an equal number of a’s, b’s, and c’s: this, a Turing Machine can do.
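    The first of these claims can be made concrete in a few lines. Below is a minimal sketch (the module name `even_zeroes` and its API are ours, purely for illustration) of a finite state machine over the alphabet Σ = {0,1} that recognises exactly the strings containing an even number of zeroes:

    ```erlang
    %% A two-state FSM: the entire machine state is `even` or `odd`.
    -module(even_zeroes).
    -export([accepts/1]).

    accepts(String) ->
        step(even, String).

    %% Terminating in `even` means the string is recognised.
    step(even, []) -> true;
    step(odd, [])  -> false;
    %% Ones do not change the parity of zeroes seen so far.
    step(State, [$1 | Rest]) -> step(State, Rest);
    %% Each zero flips the parity.
    step(even, [$0 | Rest]) -> step(odd, Rest);
    step(odd,  [$0 | Rest]) -> step(even, Rest).
    ```

    The machine never consults any auxiliary storage beyond its current state, which is precisely what makes it a finite state machine rather than a pushdown automaton.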

    How do these definitions relate to our protocols, when the input has been defined as an alphabet? In all protocols worth working on, however many inputs there are, they form a finite set which can be enumerated. When we define an input element as, for example, <stream:start to=[SomeHost]/> , the list of all possible hosts in the world is a finite list, and we can isomorphically map these hosts to integers, and define our state machines as consuming integers. Likewise for all other input schemas. So, in order to save the space of defining all possible inputs and all possible states of our machines, we will work with schemas , that is, rules to construct states and input. The abstraction is isomorphic.

    Complex states

    We know that state machine behaviours, both the old gen_fsm and the new gen_statem, really are Turing Machines: they both keep a data record that can hold unbounded memory, hence acting as the Turing Machine tape. The OTP documentation for the gen_statem behaviour even says so explicitly:

    Like most gen_ behaviours, gen_statem keeps a server Data besides the state. Because of this, and as there is no restriction on the number of states (assuming that there is enough virtual machine memory) or on the number of distinct input events, a state machine implemented with this behaviour is in fact Turing complete. But it feels mostly like an Event-Driven Mealy machine .

    But we can still model a state machine schema with accuracy. ‘gen_statem’, on initialisation, admits a callback mode called ‘handle_event_function’. We won’t go into the details, but they are well explained in the available official documentation .

    By choosing this callback mode, we can use data structures as states. Note again that, theoretically, a state machine whose states are defined by complex data structures is isomorphic to one that gives a unique name to every possible combination of such data structures’ internals: however large that set is, it is still finite.
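    As a sketch of what this looks like in practice (the module name and state terms below are illustrative assumptions, not MongooseIM’s actual code), a gen_statem declares the `handle_event_function` callback mode and can then use arbitrary terms, here a proplist, as states:

    ```erlang
    -module(xmpp_fsm_sketch).
    -behaviour(gen_statem).
    -export([start_link/0, init/1, callback_mode/0, handle_event/4]).

    start_link() ->
        gen_statem:start_link(?MODULE, [], []).

    init([]) ->
        %% The initial state is a data structure, not a bare atom.
        {ok, [{wait_for_stream, not_auth}], undefined}.

    %% Choosing handle_event_function lets any term act as the state.
    callback_mode() ->
        handle_event_function.

    handle_event(_Type, {stream_start, _Host}, [{wait_for_stream, not_auth}], Data) ->
        {next_state, [{wait_for_feature, not_auth}], Data};
    handle_event(_Type, _Event, _State, _Data) ->
        %% Anything else is postponed until the state changes.
        {keep_state_and_data, [postpone]}.
    ```

    With this shape in place, adding a new substate is a matter of pattern-matching a richer state term, not of inventing a new atom-named callback function.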

    Now, let’s implement the previous protocol in an equivalent manner, but with no data record whatsoever, with retries and asynchronous authentication included:

    handle_event(_, {stream_start, Host},
                 [{wait_for_stream, not_auth}], Data) ->
        StartCreds = get_configured_auth_for_host(Host),
        {next_state, [{wait_for_feature, not_auth}, {creds, StartCreds}, {retries, 3}], Data};
    
    handle_event(_, {authenticate, _Creds},
                 [{wait_for_feature, not_auth}, {creds, _StartCreds}, {retries, 0}], _) ->
        stop;
    handle_event(_, {authenticate, Creds},
                 [{wait_for_feature, not_auth}, {creds, StartCreds}, {retries, N}], Data) ->
        Req = auth_server:authenticate(StartCreds, Creds),
        {next_state, [{wait_for_feature, not_auth}, {req, Req}, {creds, Creds}, {retries, N - 1}], Data};
    handle_event(_, {authenticated, Req},
                 [{wait_for_feature, not_auth}, {req, Req}, {creds, Creds} | _], Data) ->
        {next_state, [{wait_for_stream, auth}, {jid, get_jid(Creds)}], Data};
    handle_event(_, _Other,
                 [{wait_for_feature, not_auth} | _], _) ->
        {keep_state_and_data, [postpone]};
    
    handle_event(_, {stream_start, _Host}, [{wait_for_stream, auth}, {jid, JID}], Data) ->
        {next_state, [{wait_for_feature, auth}, {jid, JID}], Data};
    
    handle_event(_, {session, Resource}, [{wait_for_feature, auth}, {jid, JID}], Data) ->
        FullJID = jid:replace_resource(JID, Resource),
        session_manager:put(self(), FullJID),
        {next_state, [{session, FullJID}], Data};

    And from this point on, we have a session with a known Jabber IDentifier (JID) registered in the session manager, that can send and receive messages. Note how the code pattern-matches between the given input and the state, and the state is a proplist ordered by every element being a substate of the previous.

    Now the machine is ready to send and receive messages, so we can add the following code:

    handle_event(_, {send_message_to, Message, To},
                 [{session, FullJID}], _) ->
        ToPid = session_manager:get(To),
        ToPid ! {receive_message_from, Message, FullJID},
        keep_state_and_data;
    handle_event(_, {receive_message_from, Message, _From},
                 [{session, _FullJID}], #data{socket = Socket}) ->
        tcp_socket:send(Socket, Message),
        keep_state_and_data;

    Only in these two function clauses do state machines interact with each other. There’s only one element that needs to be stored on the data record: the Socket. This element is valid for the entire life of the state machine, and while we could include it in the state definition for every state, for once we might as well keep it globally on the data record, as it is globally valid.

    Please read the code carefully, as you’ll find it is self-explanatory.

    Staged processing of events

    A protocol like XMPP is defined entirely in the Application Layer of the OSI Model , but as an implementation detail, we need to deal with the TCP (and potentially TLS) packets and transform them into the XML data structures that XMPP will use as payloads. This can be implemented as a separate gen_server that owns the socket, receives the TCP packets, decrypts them, decodes the XML binaries, and sends the final XML data structure to the state machine for processing. In fact, this is how this protocol was originally implemented, but for completely different reasons.

    In much older versions of OTP, SSL was implemented in pure Erlang code, and crypto operations (basically heavy number-crunching) were notoriously slow in Erlang. Furthermore, XML parsing was also done in pure Erlang, using linked lists as the underlying implementation of strings. Both these operations were terribly slow and prone to producing enormous amounts of garbage, so they were implemented in a separate process. Not for the purity of the state machine abstractions, but simply to unblock the original state machine from doing other protocol-related processing tasks.

    But this means a certain duplication. Every client now has two Erlang processes that send messages to each other, effectively incurring a lot of copying in the messaging. Now OTP implements crypto operations by binding to native libcrypto code, and XML parsing is done using exml , our own fastest XML parser available in the BEAM world. So the cost of packet preprocessing is now lower than the cost of message copying, and therefore it can be implemented in a single process.

    Enter internal events:

    handle_event(info, {tls, Socket, Payload}, _, #data{socket = Socket}) ->
    	XmlElements = exml:parse(tls:decrypt(Socket, Payload)),
    	StreamEvents = [{next_event, internal, El} || El <- XmlElements],
    	{keep_state_and_data, StreamEvents};

    Using this mechanism, all info messages from the socket will be preprocessed in a single function head, and all the previous handlers simply need to match on events of type internal and of contents an XML data structure.

    A pure abstraction

    We have prototyped a state machine implementing the full XMPP Core protocol (RFC6120) , without violating the abstraction of the state machine. At no point do we have a full Turing-complete machine, or even a pushdown automaton. We have a machine with a finite set of states and a finite set of input strings, albeit large as they’re both defined schematically, and a function, `handle_event/4`, that takes a new input and the current state and calculates side effects and the next state.

    However, for convenience we might break the abstraction in sensible ways. For example, in XMPP, you might want to enable different configurations for different hosts, and as the host is given in the very first event, you might as well store in the data record the host and the configuration type expected for this connection – this is what we do in MongooseIM’s implementation of the XMPP server.

    Breaking purity

    But there’s one breaking point in the XMPP case, which is in the name of the protocol itself. “X” stands for extensible , that is, any number of extensions can be defined and enabled, which can significantly change the behaviour of the machine by introducing new states or responding to new events. This means that the function 𝛿 that decides the next step and the side-effects does not depend only on the current state and the current input, but also on the enabled extensions and the data of those extensions.

    Only at this point do we need to break the finite state machine abstraction: the data record will keep an unbounded map of extensions and their data records, and 𝛿 will need to take this map into account to decide not only the next state and the side-effects, but also what to write on the map. That is, here, our State Machine does finally convert into a fully-fledged Turing Machine.

    With great power…

    Restraining your protocol to a Finite State Machine has certain advantages:

    • Memory consumption: the main difference, simplifying, between a Turing Machine and an FSM, is that the Turing Machine has infinite memory at its disposal, which means that when you have way too many Turing Machines roaming around in your system, it might get hard to reason about the amount of memory they all consume and how it aggregates. In contrast, it’s easier to reason about upper bounds for the memory the FSMs will need.
    • Determinism: FSMs exhibit deterministic behaviour, meaning that the transition from one state to another is uniquely determined by the input. This determinism can be advantageous in scenarios where predictability and reliability are crucial. Turing machines instead can exhibit a complexity that may not be needed for certain applications.
    • Halting: we have all heard of the Halting Problem, right? Turns out, proving that a Finite State Machine halts is always possible.
    • Testing: as the number of states and transitions of an FSM are finite, testing all the code-paths of such a machine is indeed a finite task. There are indeed State Machine learning algorithms that verify implementations (see LearnLib ) and property-based testing has a good chance to reach all edge-cases.

    When all we want is to implement a communication protocol, be it XMPP or TLS, where what we implement is a relationship of inputs, states, and outputs, a Finite State Machine is the right tool for the job. Using hierarchical states can model certain states better than using a simplified version of the states plus global memory to decide the transitions (i.e., to implement 𝛿), and will result in a purer and more testable implementation.

    Further examples:

    The post Reimplementing Technical Debt with State Machines appeared first on Erlang Solutions .

    • chevron_right

      Erlang Solutions: Advent of Code 2023

      news.movim.eu / PlanetJabber · Friday, 1 December, 2023 - 12:26 · 3 minutes

    Hello! I’m Piotr from Erlang Solutions Poland and I have the pleasure of saving Christmas this year with the power of Erlang for you!

    This is the second time we participate in the amazing event called the Advent of Code . Last year’s edition was solved by my colleague Aleksander and as far as I know – many of you enjoyed following his efforts. I hope you’ll like my tale of helping Santa too!

    I’m going to publish my solutions in my GitHub repository . They will be accompanied by a commentary, added to this page on a daily basis. I will add solutions for each day in an individual folder, with my input file downloaded from the AoC website.

    I’m also going to include a bit of microbenchmarking in every solution, with and without JIT: firstly, to measure the overall performance of the code, and secondly, to see how much the efficiency improves thanks to JIT. I’m going to measure the computation time only, with a `timer:tc/3` call, as I consider the time needed to compile a module and load the input file irrelevant. By “load” I mean: read it and split it into lines. Any further processing of individual lines is considered a computation. I will provide the min, max and arithmetic average of 100 runs.

    I’ll be implementing solutions as EScripts, so running them is a bit more straightforward. And frankly – I think they are underrated and sometimes I prefer them over writing BASH scripts. I’ll always include `-mode(compile).` directive to avoid the interpretation performance penalty. For those who are not aware of this capability, I’ll also run Day 1 without this option to show you how the timings change.
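    For the curious, a skeleton of such an EScript might look as follows. The file layout and function names here are my assumptions for illustration; the only points being made are the `-mode(compile).` directive and the `timer:tc`-based measurement (I use the fun-based `timer:tc/2` variant in the sketch, which measures the same thing):

    ```erlang
    #!/usr/bin/env escript
    %% Avoid the interpretation performance penalty.
    -mode(compile).

    main([InputFile]) ->
        %% "Loading" = reading the file and splitting it into lines;
        %% this part is deliberately excluded from the timings.
        {ok, Bin} = file:read_file(InputFile),
        Lines = binary:split(Bin, <<"\n">>, [global, trim_all]),
        %% Time only the computation, 100 times.
        Timings = [element(1, timer:tc(fun solve/1, [Lines]))
                   || _ <- lists:seq(1, 100)],
        io:format("min ~bus avg ~bus max ~bus~n",
                  [lists:min(Timings),
                   lists:sum(Timings) div length(Timings),
                   lists:max(Timings)]).

    %% Puzzle-specific computation goes here.
    solve(Lines) ->
        lists:sum([parse(Line) || Line <- Lines]).

    parse(_Line) -> 0.
    ```

    Running it is then just `./day1.escript input.txt`, with no explicit compilation step.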

    I’m going to run every piece of the code on Linux Mint 21.2 VirtualBox machine with 4 cores and 8GB of memory, hosted on my personal PC with Ryzen 3700X and DDR4 at 3200MHz. I will use OTP 26.1.1.

    Day 1

    Part 1

    I would never suspect that I’ll begin the AoC challenge with being loaded onto a trebuchet. I’d better do the math properly! Or rather – have Erlang do the calibration for me.

    FYI: I do have some extra motivation to repair the snow production: my kids have been singing “Do You Want to Build a Snowman?” for a couple of days already and there is still nowhere enough of it where I live.

    I considered three approaches to the first part of the puzzle:

    1. Run a regular expression on each line.
    2. Filter characters of a line with binary comprehension and then get the first and last digit from the result.
    3. Iterate over characters of a line and store digits in two accumulators.

    I chose the last one, as (1) felt like shooting a mosquito with an M61 Vulcan Cannon, and the second one felt less Erlang-ish than the third. After all, matching binaries and recursive solutions are very natural in this language.
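    A minimal sketch of that third approach (module and function names are mine, not necessarily the published solution) could look like this:

    ```erlang
    -module(day1_part1).
    -export([calibration/1]).

    %% Walk the binary once, remembering the first and the last digit seen.
    calibration(Line) ->
        digits(Line, undefined, undefined).

    %% First digit encountered initialises both accumulators.
    digits(<<C, Rest/binary>>, undefined, _) when C >= $0, C =< $9 ->
        digits(Rest, C - $0, C - $0);
    %% Any later digit only updates the "last" accumulator.
    digits(<<C, Rest/binary>>, First, _) when C >= $0, C =< $9 ->
        digits(Rest, First, C - $0);
    %% Non-digit characters are skipped.
    digits(<<_, Rest/binary>>, First, Last) ->
        digits(Rest, First, Last);
    digits(<<>>, First, Last) ->
        First * 10 + Last.
    ```

    For example, `day1_part1:calibration(<<"1abc2">>)` yields 12 and `day1_part1:calibration(<<"treb7uchet">>)` yields 77, matching the puzzle’s examples.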

    Timings

                          Min         Avg         Max
    Compiled + JIT        0.000091s   0.000098s   0.000202s
    Compiled + no JIT     0.000252s   0.000268s   0.000344s
    Interpreted           0.091494s   0.094965s   0.111017s

    Part 2

    By choosing the method of matching binaries, I was able to add support for digits as words pretty easily. If there were more mappings than just nine, I’d probably use a map to store all possible conversions and maybe even compile a regular expression from them.

    Eventually, the temptation of violating the DRY rule a bit was too strong and I just went for individual function clauses.

    And my solution was invalid. Shame on me, but I admit I needed a hint from other participants – it turned out that some words can overlap, and the overlapping words have to be treated as individual digits. It wasn’t explicitly specified, and ignoring overlaps in the example did not lead to an invalid result – a truly evil decision of the AoC maintainers!

    Simply put, at first I thought such a code would be enough:

    parse(<<"one", Rest/binary>>, First, _Last) -> store(1, Rest, First);

    But the actual Rest must be defined as <<_:8, Rest/binary>>.
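    To illustrate the overlap issue concretely, here is a small sketch (reduced to just two word-digits, and not the actual published solution) where advancing by a single character after a match lets “oneight” yield both a 1 and an 8:

    ```erlang
    -module(day1_overlap).
    -export([last_digit/1]).

    last_digit(Line) ->
        last_digit(Line, undefined).

    %% After matching a word, advance by ONE character, not by the word
    %% length, so the trailing "e" of "one" can still start "eight".
    last_digit(<<"one", _/binary>> = Bin, _) ->
        <<_:8, Rest/binary>> = Bin,
        last_digit(Rest, 1);
    last_digit(<<"eight", _/binary>> = Bin, _) ->
        <<_:8, Rest/binary>> = Bin,
        last_digit(Rest, 8);
    %% Any other character is skipped.
    last_digit(<<_:8, Rest/binary>>, Last) ->
        last_digit(Rest, Last);
    last_digit(<<>>, Last) ->
        Last.
    ```

    With this, `day1_overlap:last_digit(<<"oneight">>)` returns 8; consuming the whole word `"one"` instead would have skipped the `"eight"` and returned 1.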

    Timings

                          Min         Avg         Max
    Compiled + JIT        0.000212s   0.000225s   0.000324s
    Compiled + no JIT     0.000648s   0.000679s   0.000778s
    Interpreted           0.207670s   0.213344s   0.242223s

    JIT does make a difference, doesn’t it?

    The post Advent of Code 2023 appeared first on Erlang Solutions .

    • chevron_right

      Ignite Realtime Blog: More Openfire plugin maintenance releases!

      news.movim.eu / PlanetJabber · Tuesday, 28 November, 2023 - 14:24 · 2 minutes

    Following the initial batch of Openfire plugin releases that we did last week, another few have been made available!

    Version 1.0.1 of the Spam Blacklist plugin was released. This plugin uses an external blocklist to reject traffic from specific addresses. This is a minor maintenance release that does not introduce functionality changes.

    Version 1.0.0 of the EXI plugin was released. Efficient XML Interchange (EXI) is a binary XML format for the exchange of data on a computer network. It is one of the most prominent efforts to encode XML documents in a binary data format rather than plain text. Using the EXI format reduces the verbosity of XML documents as well as the cost of parsing. Improvements in the performance of writing (generating) content depend on the speed of the medium being written to, and on the methods and quality of the actual implementation. After our request for comments on this prototype, no major defects were reported. As such, we’ve decided to publish a proper release of the plugin!

    Version 1.0.4 of the Email on Away plugin was released. This plugin allows forwarding messages to a user’s email address when the user is away (not offline). In this release, the build process was fixed. No functional changes were introduced.

    Version 1.0.0 of the Push Notification plugin was released. This plugin adds support for sending push notifications to client software, as described in XEP-0357: “Push Notifications” . In this release, compatibility with Openfire 4.8 was implemented.

    Version 0.0.3 of the Ohùn plugin was released. This plugin implements a simple audio conferencing solution for Openfire using the Kraken WebRTC client and server . No functional changes were introduced in this release.

    Version 0.0.3 of the Gitea plugin was released. This Openfire plugin adds real-time communication to content management, using a familiar Git-based workflow to create a very responsive collaboration platform that enables an agile team to create, manage and deliver any type of content with quality assurance. In this release, the Gitea dependency was updated to 1.7.3.

    Version 1.3.0 of the User Status plugin was released. This plugin automatically saves the last status (presence, IP address, logon and logoff time) per user and resource to userStatus table in the Openfire database. In this release, compatibility with Openfire 4.8 was implemented.

    All of these plugins should show up in your Openfire admin console in the next few hours. You can also download them directly from their archive pages, which is linked to in the text above.

    For other release announcements and news follow us on Mastodon or X

    1 post - 1 participant

    Read full topic