

      Gajim: Gajim 1.5.4

      news.movim.eu / PlanetJabber • 3 December, 2022 • 1 minute

    Gajim 1.5.4 comes with a reworked file transfer interface, better URL detection, message selection improvements, and many fixes under the hood. Thank you for all your contributions!

    What’s New

    Gajim’s interface for sending files has been reworked and should be much easier to use now. For each file you’re about to send, Gajim will generate a preview. This way, you can avoid sending the wrong file to somebody. Regardless of how you start a file transfer, be it drag and drop, pasting a screenshot, or simply clicking the share button, you’ll always be able to check what you’re about to send.

    Gajim’s new file transfer interface


    More Changes

    • Performance: Chat history is now displayed quicker
    • Support for Jingle XTLS has been dropped, since it hasn’t been standardized
    • geo:-URIs are now prettier (thanks, @mjk)
    • Dependencies: pyOpenSSL has been replaced by python-cryptography

    Fixes

    • Fixes for message selection
    • Improvements for recognizing URLs (@mjk)
    • Many fixes to improve Gajim’s usability

    Over 20 issues have been fixed in this release. Have a look at the changelog for a complete list.


    As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our GitLab.


      Ignite Realtime Blog: Openfire Monitoring Service plugin 2.4.0 release

      news.movim.eu / PlanetJabber • 22 November, 2022

    Earlier today, we released version 2.4.0 of the Openfire Monitoring Service plugin. This plugin adds both statistics and message archiving functionality to Openfire.

    This release adds compatibility with future versions of Openfire. A bug that affects MSSQL users has been fixed, the dreaded “Unable to save XML properties” error message has been resolved, and a few other minor tweaks have been made.

    As always, your instance of Openfire should automatically display the availability of the update. Alternatively, you can download the new release of the plugin at the Monitoring plugin’s archive page.

    For other release announcements and news, follow us on Twitter.



      ProcessOne: ejabberd 22.10

      news.movim.eu / PlanetJabber • 28 October, 2022 • 7 minutes

    This ejabberd 22.10 release includes five months of work and over 120 commits, with relevant improvements in MIX, MUC, SQL, and the installers, as well as the usual bug fixes.

    ejabberd 22.10 released
    This version brings support for the latest MIX protocol version, and significantly improves detection and recovery of SQL connection issues.

    There are no breaking changes in SQL schemas, configuration, or commands API. If you develop an ejabberd module, notice two hooks have changed: muc_subscribed and muc_unsubscribed .

    A more detailed explanation of those topics and other features:

    Erlang/OTP 19.3

    You may remember that in the previous ejabberd release, ejabberd 22.05 , support for Erlang/OTP 25 was introduced, even though 24.3 is still recommended for stable deployments.

    It is expected that around April 2023, GitHub Actions will remove Ubuntu 18, making it impossible to run ejabberd’s automatic tests with Erlang 19.3; the lowest testable version will then be Erlang 20.0.

    For that reason, the planned schedule is:

    • ejabberd 22.10
      • Usage of Erlang 19.3 is discouraged
      • Anybody still using Erlang 19.3 is encouraged to upgrade to 24.3, or at least 20.0.
    • ejabberd 23.05 (or later)
      • Support for Erlang 19.3 is deprecated
      • Erlang requirement softly increased in `configure.ac`
      • Announce: no guarantee that ejabberd can compile, start, or pass the Common Test suite using Erlang 19.3
      • Provide instructions for anybody to manually re-enable it and run the tests.
    • ejabberd 23.xx+1 (or later)
      • Support for Erlang 19.3 is removed completely in the source code

    New log_burst_limit_* options

    Two options were added in #3865 to configure logging limits in case of high traffic:

    • log_burst_limit_window_time defines the time period to rate-limit log messages by.

    • log_burst_limit_count defines the number of messages to accept in that time period before starting to drop them.
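    Based on those descriptions, a sketch of the corresponding ejabberd.yml configuration could look like this (the values are purely illustrative, not defaults):

    ```yaml
    ## Accept at most 500 log messages per 1-second window,
    ## then drop further messages until the window resets.
    log_burst_limit_window_time: 1
    log_burst_limit_count: 500
    ```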

    Support ERL_DIST_PORT option to work without epmd

    The option ERL_DIST_PORT is added to ejabberdctl.cfg , disabled by default.

    When this option is set to a port number, the Erlang node will not start epmd and will not listen on a range of ports for Erlang connections (typically used for ejabberdctl and for clustering ). Instead, the Erlang node will simply listen on that single port.

    Please note:

    • Erlang/OTP 23.1 or higher is required to use ERL_DIST_PORT
    • make relive doesn’t support ERL_DIST_PORT , neither with rebar3 nor with Elixir
    • To start several ejabberd nodes in the same machine, configure a different port in each node
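    Putting those notes together, enabling the option in ejabberdctl.cfg could look like this (5210 is an arbitrary example port; any free TCP port works, and each node on the same machine needs its own):

    ```sh
    # Listen for Erlang distribution on a single fixed port
    # instead of starting epmd (requires Erlang/OTP 23.1+).
    ERL_DIST_PORT=5210
    ```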

    Support version macros in captcha_cmd option

    Support for the @VERSION@ and @SEMVER@ macros was added to the captcha_cmd option in #3835 .

    Those macros are useful because the example captcha scripts are copied in a path like ejabberd-VERSION/priv/bin that depends on the ejabberd version number and changes for each release. Also, depending on the install method (rebar3 or Elixir’s mix), that VERSION may be in XX.YY or in SEMVER format (respectively).

    Now, it’s possible to configure like this:

    captcha_cmd: /opt/ejabberd-@VERSION@/lib/ejabberd-@SEMVER@/priv/bin/captcha.sh
    

    Hook Changes

    Two hooks have changed: muc_subscribed and muc_unsubscribed . Now they get the packet and room state, and can modify the sent packets. If you write source code that adds functions to those hooks, please note that previously they were run like:

    ejabberd_hooks:run(muc_subscribed, ServerHost, [ServerHost, Room, Host, BareJID]);
    

    and now they are run like this:

    {Packet2a, Packet2b} = ejabberd_hooks:run_fold(muc_subscribed, ServerHost, {Packet1a, Packet1b},
    [ServerHost, Room, Host, BareJID, StateData]),
    

    where Packet1b is a copy of Packet1a without the jid attribute in the muc_subscribe element.

    Translations Updates

    Several translations were improved: Ukrainian, Chinese (Simplified), French, German, Russian, Portuguese (Brazil), Spanish and Catalan. Thanks to all the people who contribute to ejabberd translations at Weblate !

    WebAdmin page for external modules

    A new page has been added to ejabberd’s WebAdmin to view available external modules, update their source code, and install, upgrade or remove them. All of this is equivalent to what was already available using API commands from the modules tag .

    Many modules in the ejabberd-contrib git repository have been improved, and their documentation updated. Additionally, those modules are now automatically tested, at least compilation, installation and static code analysis.

    Documentation Improvements

    In addition to the normal improvements and fixes, two sections of the ejabberd Documentation have been greatly improved.

    ChangeLog

    General

    • Add log_burst_limit_* options ( #3865 )
    • Support ERL_DIST_PORT option to work without epmd
    • Auth JWT: Catch all errors from jose_jwt:verify and log debugging details ( #3890 )
    • CAPTCHA: Support @VERSION@ and @SEMVER@ in captcha_cmd option ( #3835 )
    • HTTP: Fix unix socket support ( #3894 )
    • HTTP: Handle invalid values in X-Forwarded-For header more gracefully
    • Listeners: Let module take over socket
    • Listeners: Don’t register listeners that failed to start in config reload
    • mod_admin_extra : Handle empty roster group names
    • mod_conversejs : Fix crash when mod_register not enabled ( #3824 )
    • mod_host_meta : Complain at start if listener is not encrypted
    • mod_ping : Fix regression on stop_ping in clustering context ( #3817 )
    • mod_pubsub : Don’t crash on command failures
    • mod_shared_roster : Fix cache invalidation
    • mod_shared_roster_ldap : Update roster_get hook to use #roster_item{}
    • prosody2ejabberd : Fix parsing of scram password from prosody

    MIX

    • Fix MIX’s filter_nodes
    • Return user jid on join
    • mod_mix_pam : Add new MIX namespaces to disco features
    • mod_mix_pam : Add handling of IQs with newer MIX namespaces
    • mod_mix_pam : Do roster pushes on join/leave
    • mod_mix_pam : Parse sub elements of the mix join remote result
    • mod_mix_pam : Provide MIX channels as roster entries via hook
    • mod_mix_pam : Display joined channels on webadmin page
    • mod_mix_pam : Adapt to renaming of participant-id from mix_roster_channel record
    • mod_roster : Change hook type from #roster{} to #roster_item{}
    • mod_roster : Respect MIX “ setting
    • mod_roster : Adapt to change of mix_annotate type to boolean in roster_query
    • mod_shared_roster : Fix wrong hook type #roster{} (now #roster_item{} )

    MUC

    • Store role, and use it when joining a moderated room ( #3330 )
    • Don’t persist none role ( #3330 )
    • Allow MUC service admins to bypass max_user_conferences limitation
    • Show allow_query_users room option in disco info ( #3830 )
    • Don’t set affiliation to none if it’s already none in mod_muc_room:process_item_change/3
    • Fix mucsub unsubscribe notification payload to have muc_unsubscribe in it
    • Allow muc_{un}subscribe hooks to modify sent packets
    • Pass room state to muc_{un}subscribed hook
    • The archive_msg export fun requires MUC Service for room archives
    • Export mod_muc_admin:get_room_pid/2
    • Export function for getting room diagnostics

    SQL

    • Handle errors reported from begin/commit inside transaction
    • Make connection close errors bubble up from inside sql transaction
    • Make first sql reconnect wait shorter time
    • React to sql driver process exit earlier
    • Skip connection exit message when we triggered reconnection
    • Add syntax_tools to applications, required when using ejabberd_sql_pt ( #3869 )
    • Fix mam delete_old_messages_batch for sql backend
    • Use INSERT ... ON DUPLICATE KEY UPDATE for upsert on mysql
    • Update mysql library
    • Catch mysql connection being closed earlier

    Compile

    • make all : Generate start scripts here, not in make install ( #3821 )
    • make clean : Improve this and “distclean”
    • make deps : Ensure deps configuration is run when getting deps ( #3823 )
    • make help : Update with recent changes
    • make install : Don’t leak DESTDIR in files copied by ‘make install’
    • make options : Fix error reporting on OTP24+
    • make update : configure also in this case, similarly to make deps
    • Add definition to detect OTP older than 25, used by ejabberd_auth_http
    • Configure eimp with mix to detect image convert properly ( #3823 )
    • Remove unused macro definitions detected by rebar3_hank
    • Remove unused header files whose content is already in the xmpp library

    Container

    • Get ejabberd-contrib sources to include them
    • Copy .ejabberd-modules directory if available
    • Do not clone repo inside container build
    • Use make deps , which performs additional steps ( #3823 )
    • Support ERL_DIST_PORT option to work without epmd
    • Copy ejabberd-docker-install.bat from docker-ejabberd git and rename it
    • Set a less frequent healthcheck to reduce CPU usage ( #3826 )
    • Fix build instructions, add more podman examples

    Installers

    • make-binaries: Include CAPTCHA script with release
    • make-binaries: Edit rebar.config more carefully
    • make-binaries: Fix linking of EIMP dependencies
    • make-binaries: Fix GitHub release version checks
    • make-binaries: Adjust Mnesia spool directory path
    • make-binaries: Bump Erlang/OTP version to 24.3.4.5
    • make-binaries: Bump Expat and libpng versions
    • make-packages: Include systemd unit with RPM
    • make-packages: Fix permissions on RPM systems
    • make-installers: Support non-root installation
    • make-installers: Override code on upgrade
    • make-installers: Apply cosmetic changes

    External modules

    • ext_mod: Support managing remote nodes in the cluster
    • ext_mod: Handle correctly when COMMIT.json not found
    • Don’t bother with the COMMIT.json user-friendly feature in automated use cases
    • Handle not found COMMIT.json, for example in GH Actions
    • Add WebAdmin page for managing external modules

    Workflows Actions

    • Update workflows to Erlang 25
    • Update workflows: Ubuntu 18 is deprecated and 22 is added
    • CI: Remove syntax_tools from applications, as fast_xml fails Dialyzer
    • Runtime: Add Xref options to be as strict as CI

    Full Changelog

    https://github.com/processone/ejabberd/compare/22.05...22.10

    ejabberd 22.10 download & feedback

    As usual, the release is tagged in the Git source code repository on GitHub .

    The source package and installers are available on the ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity .

    For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags .

    The Docker image is in Docker Hub , and there’s an alternative Container image in GitHub Packages .

    If you suspect that you’ve found a bug, please search for it or file a bug report on GitHub Issues .


      Paul Schaub: Implementing Packet Sequence Validation using Pushdown Automata

      news.movim.eu / PlanetJabber • 26 October, 2022 • 6 minutes

    This is part 2 of a small series on verifying the validity of packet sequences using tools from theoretical computer science. Read part 1 here .

    In the previous blog post I discussed how a formal grammar can be transformed into a pushdown automaton in order to check if a sequence of packets or tokens is part of the language described by the grammar. In this post I will discuss how I implemented said automaton in Java in order to validate OpenPGP messages in PGPainless.

    In the meantime, I made some slight changes to the automaton and removed some superfluous states. My current design of the automaton looks as follows:

    If you compare this diagram to the previous iteration, you can see that I got rid of the states “Signed Message”, “One-Pass-Signed Message” and “Corresponding Signature”. Those were states which only had ε-transitions to another state, so they were not really useful.

    For example, the state “One-Pass-Signed Message” would only be entered when the input “OPS” was read and ‘m’ could be popped from the stack. After that, there would only be a single applicable rule, which would read no input, pop nothing from the stack and instead push back ‘m’. Therefore, these two rules could be combined into a single rule which reads input “OPS”, pops ‘m’ from the stack and immediately pushes it back onto it. This rule leaves the automaton in state “OpenPGP Message”. Both automata are equivalent.

    One more minor detail: since I am using Bouncy Castle, I have to deal with some of its quirks. One of those is that BC bundles together encrypted session keys (PKESKs/SKESKs) with the actual encrypted data packets (SEIPD/SED). Therefore, when implementing, we can further simplify the diagram by removing the SKESK|PKESK parts:

    Now, in order to implement this automaton in Java, I decided to define enums for the input and stack alphabets, as well as the states:

    public enum InputAlphabet {
        LiteralData,
        Signature,            // Sig
        OnePassSignature,     // OPS
        CompressedData,
        EncryptedData,        // SEIPD|SED
        EndOfSequence         // End of message/nested data
    }
    public enum StackAlphabet {
        msg,                 // m
        ops,                 // o
        terminus             // #
    }
    public enum State {
        OpenPgpMessage,
        LiteralMessage,
        CompressedMessage,
        EncryptedMessage,
        Valid
    }

    Note, that there is no “Start” state, since we will simply initialize the automaton in state OpenPgpMessage , with ‘m#’ already put on the stack.

    We also need an exception class that we can throw when an OpenPGP packet is read when it’s not allowed. Therefore I created a MalformedOpenPgpMessageException class.

    Now the first design of our automaton itself is pretty straightforward:

    public class PDA {
        private State state;
        private final Stack<StackAlphabet> stack = new Stack<>();
        
        public PDA() {
            state = State.OpenPgpMessage;    // initial state
            stack.push(terminus);            // push '#'
            stack.push(msg);                 // push 'm'
        }
    
        public void next(InputAlphabet input)
                throws MalformedOpenPgpMessageException {
            // TODO: handle the next input packet
        }
    
        StackAlphabet popStack() {
            if (stack.isEmpty()) {
                return null;
            }
            return stack.pop();
        }
    
        void pushStack(StackAlphabet item) {
            stack.push(item);
        }
    
        boolean isEmptyStack() {
            return stack.isEmpty();
        }
    
        public boolean isValid() {
            return state == State.Valid && isEmptyStack();
        }
    }

    As you can see, we initialize the automaton with a pre-populated stack and an initial state. The automaton’s isValid() method only returns true if the automaton ended up in state “Valid” and the stack is empty.

    What’s missing is an implementation of the transition rules. I found it most straightforward to implement those inside the State enum itself by defining a transition() method:

    public enum State {
    
        OpenPgpMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                if (stackItem != msg) {    // expect 'm' on the stack
                    throw new MalformedOpenPgpMessageException();
                }
                switch(input) {
                    case LiteralData:
                        // Literal Packet,m/ε
                        return LiteralMessage;
                    case Signature:
                        // Sig,m/m
                        automaton.pushStack(msg);
                        return OpenPgpMessage;
                    case OnePassSignature:
                        // OPS,m/mo
                        automaton.pushStack(ops);
                        automaton.pushStack(msg);
                        return OpenPgpMessage;
                    case CompressedData:
                        // Compressed Data,m/ε
                        return CompressedMessage;
                    case EncryptedData:
                        // SEIPD|SED,m/ε
                        return EncryptedMessage;
                    case EndOfSequence:
                    default:
                        // No transition
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        LiteralMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                switch(input) {
                    case Signature:
                        if (stackItem == ops) {
                            // Sig,o/ε
                            return LiteralMessage;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    case EndOfSequence:
                        if (stackItem == terminus && automaton.isEmptyStack()) {
                            // ε,#/ε
                            return Valid;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    default:
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        CompressedMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                switch(input) {
                    case Signature:
                        if (stackItem == ops) {
                            // Sig,o/ε
                            return CompressedMessage;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    case EndOfSequence:
                        if (stackItem == terminus && automaton.isEmptyStack()) {
                            // ε,#/ε
                            return Valid;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    default:
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        EncryptedMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                switch(input) {
                    case Signature:
                        if (stackItem == ops) {
                            // Sig,o/ε
                            return EncryptedMessage;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    case EndOfSequence:
                        if (stackItem == terminus && automaton.isEmptyStack()) {
                            // ε,#/ε
                            return Valid;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    default:
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        Valid {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                // Cannot transition out of Valid state
                throw new MalformedOpenPgpMessageException();
            }
        }
        ;
    
        abstract State transition(InputAlphabet input, PDA automaton)
                throws MalformedOpenPgpMessageException;
    }

    It might make sense to define the transitions in an external class to allow for different grammars and to remove the dependency on the PDA class, but I do not care about this for now, so I’m fine with it.

    Now every State has a transition() method, which takes an input symbol and the automaton itself (for access to the stack) and either returns the new state, or throws an exception in case of an illegal token.

    Next, we need to modify our PDA class, so that the new state is saved:

    public class PDA {
        [...]
    
        public void next(InputAlphabet input)
                throws MalformedOpenPgpMessageException {
            state = state.transition(input, this);
        }
    }

    Now we are able to verify simple packet sequences by feeding them one-by-one to the automaton:

    // LIT EOS
    PDA pda = new PDA();
    pda.next(LiteralData);
    pda.next(EndOfSequence);
    assertTrue(pda.isValid());
    
    // OPS LIT SIG EOS
    pda = new PDA();
    pda.next(OnePassSignature);
    pda.next(LiteralData);
    pda.next(Signature);
    pda.next(EndOfSequence);
    assertTrue(pda.isValid());
    
    // COMP EOS
    pda = new PDA();
    pda.next(CompressedData);
    pda.next(EndOfSequence);
    assertTrue(pda.isValid());

    You might say “Hold up! The last example is a clear violation of the syntax! A compressed data packet alone does not make a valid OpenPGP message!”.

    And you are right. A compressed data packet is only a valid OpenPGP message if its decompressed contents also represent a valid OpenPGP message. Therefore, when using our PDA class, we need to take care of packets with nested streams separately. In my implementation, I created an OpenPgpMessageInputStream which, besides consuming the packet stream and handling the actual decryption, decompression etc., also takes care of handling nested PDAs. I will not go into too much detail, but the following code should give a good idea of the architecture:

    public class OpenPgpMessageInputStream {
        private final PDA pda = new PDA();
        private BCPGInputStream pgpIn = ...; // stream of OpenPGP packets
        private OpenPgpMessageInputStream nestedStream;
    
        public OpenPgpMessageInputStream(BCPGInputStream pgpIn) {
            this.pgpIn = pgpIn;
            switch(pgpIn.nextPacketTag()) {
                case LIT:
                    pda.next(LiteralData);
                    ...
                    break;
                case COMP:
                    pda.next(CompressedData);
                    nestedStream = new OpenPgpMessageInputStream(decompress());
                    ...
                    break;
                case OPS:
                    pda.next(OnePassSignature);
                    ...
                    break;
                case SIG:
                    pda.next(Signature);
                    ...
                    break;
                case SEIPD:
                case SED:
                    pda.next(EncryptedData);
                    nestedStream = new OpenPgpMessageInputStream(decrypt());
                    ...
                    break;
                default:
                    // Unknown / irrelevant packet
                    throw new MalformedOpenPgpMessageException();
            }
        }
    
        boolean isValid() {
            return pda.isValid() &&
                   (nestedStream == null || nestedStream.isValid());
        }
    
        @Override
        public void close() {
            if (!isValid()) {
                throw new MalformedOpenPgpMessageException();
            }
            ...
        }
    }

    The key thing to take away here is that when we encounter a nesting packet ( EncryptedData , CompressedData ), we create a nested OpenPgpMessageInputStream on the decrypted / decompressed contents of this packet. Once we are ready to close the stream (because we reached the end), we not only check if our own PDA is in a valid state, but also whether the nestedStream (if there is one) is valid too.

    This code is of course only a rough sketch, and the actual implementation is far more complex in order to cover many possible edge cases. Yet, it should still give a good idea of how to use pushdown automata to verify packet sequences 🙂 Feel free to check out my real-world implementation here and here .

    Happy Hacking!


      Erlang Solutions: Learning functional and concurrent programming concepts with Elixir

      news.movim.eu / PlanetJabber • 19 October, 2022 • 9 minutes

    If you are early in the process of learning Elixir or considering learning it in the future, you may have wondered a few things.  What is the experience like? How easy is it to pick up functional and concurrent programming concepts when coming from a background in languages which lack those features? Which aspects of the language are the most challenging for newcomers to learn?

    In this article, I will relate my experience as a new Elixir developer, working to implement the dice game Yatzy as my first significant project with the language.

    So far in my education and career, I have worked primarily with Java.

    This project was my first extensive exposure to concepts such as recursive functions, concurrent processes, supervision trees, and finite state machines, all of which will be covered in more depth throughout this article.

    The rules of Yatzy

    Yatzy is a variation of Yahtzee, with slight but notable differences to the rules and scoring. Players take turns rolling a set of five dice. They have the option to choose any number of their dice to re-roll up to two times each turn. After this, they must choose one of fifteen categories to score in. The “upper half” of the scorecard consists of six categories, “ones” through “sixes”. The score for each is simply the sum of all dice with the specified number. The “lower half”, consisting of the remaining nine categories, has more specific requirements, such as “two pairs”, “three of a kind”, “full house”, etc.

    If a player’s total score in the upper half is equal to or greater than 63, they receive a 50-point bonus. The player with the highest total across the whole scorecard once all categories have been filled wins the game.
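    As a sketch of how these scoring rules translate into code, the upper-half score and bonus could be computed like this (module and function names are my own, not taken from the project):

    ```elixir
    defmodule Yatzy.Upper do
      # Upper-half category score: sum of all dice showing `n`.
      def score(roll, n), do: roll |> Enum.filter(&(&1 == n)) |> Enum.sum()

      # 50-point bonus once the upper-half total reaches 63.
      def bonus(total) when total >= 63, do: 50
      def bonus(_total), do: 0
    end
    ```

    For example, `Yatzy.Upper.score([5, 5, 5, 3, 1], 5)` yields 15.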

    Requirements of the project

    Given this ruleset, a functioning implementation of Yatzy would need to do the following:

    • Simulate dice rolls, including those where certain dice are kept for subsequent rolls
    • Calculate the score a roll would result in for each category
    • Save each player’s scorecard throughout the entire game
    • Determine the winner at the end of the game
    • Allow the players to take these actions via a simple UI.

    Due to my object-oriented background, my approach to this project in prior years would be to define classes to represent relevant concepts, such as the player, the scorecard, and the roll, and maintain the state via instances of these objects.

    Additionally, I would make use of iteration via loops to traverse data structures. Working with Elixir requires these problems to be tackled in different ways. The concepts are instead represented by processes that can be run concurrently, and data structures are traversed with recursive functions.

    Adapting to this different structure and way of thinking was the most challenging and rewarding part of this project.

    Score calculations and pattern matching

    My first step in writing the project was to implement functions for rolling a set of five dice and calculating the potential scores of those dice rolls in each available category. The dice roll itself was fairly simple, but makes use of a notable feature of Elixir that I had not previously encountered: setting a default argument for a function.

    In this instance, the roll function takes a single argument, ‘keep’, representing the dice from a previous roll that the player has chosen to keep.

    def roll(keep \\ []) do
      dice = 1..6
      number_of_dice = 5 - Enum.count(keep)
      func = fn -> Enum.random(dice) end
      roll = Stream.repeatedly(func) |> Enum.take(number_of_dice)
      keep ++ roll
    end
    

    Here ‘keep’ has a default value of an empty list that will be used if ‘roll’ is called with no arguments, as it would be for the first roll in any turn. If a list is passed to ‘roll’, the function will only generate enough new numbers to fill out the rest of the roll, and then combine this list with ‘keep’ for its final output. This allowed my code to be simpler, defining one function head that could be used in multiple different scenarios.

    The score calculations themselves were far more complex and required making use of Elixir’s pattern-matching capabilities.

    In this case, testing for a valid score in each category required accounting for every possible configuration the dice could appear in when passed into the function. I was able to greatly reduce the number of cases by ensuring the dice were sorted descending when passed, but this still left a lot to account for. However, Elixir’s pattern matching makes this process easier than it would be otherwise: the cases can be handled entirely in the function heads, and each function can be written in a single line:

    def two_pairs([x, x, y, y, _]) when x != y, do: x * 2 + y * 2
    def two_pairs([x, x, _, y, y]) when x != y, do: x * 2 + y * 2
    def two_pairs([_, x, x, y, y]) when x != y, do: x * 2 + y * 2
    def two_pairs(_roll), do: 0
    
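    A few sample calls show the heads in action (wrapped in a hypothetical `Yatzy.Score` module so the snippet stands alone; remember the dice arrive sorted in descending order):

```elixir
defmodule Yatzy.Score do
  # Dice arrive sorted descending, so each possible position of the
  # two distinct pairs is covered by one function head.
  def two_pairs([x, x, y, y, _]) when x != y, do: x * 2 + y * 2
  def two_pairs([x, x, _, y, y]) when x != y, do: x * 2 + y * 2
  def two_pairs([_, x, x, y, y]) when x != y, do: x * 2 + y * 2
  def two_pairs(_roll), do: 0
end

Yatzy.Score.two_pairs([5, 5, 3, 3, 1]) # => 16
Yatzy.Score.two_pairs([6, 4, 4, 2, 2]) # => 12
Yatzy.Score.two_pairs([6, 5, 4, 3, 2]) # => 0  (no pairs at all)
```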

    Processes and GenServers

    The next step of building the game was to implement processes, starting with those for the player and the scorecard. Processes in Elixir are vital for maintaining state and enabling concurrency, as many of them can run simultaneously. I was able to set up a process for each player in the game, one for the scorecard belonging to each of those players, as well as one more to handle the score calculations.

    As processes are dissimilar to the object-oriented model, they were the aspect of Elixir that took me the longest to adjust to. I became comfortable with them by first learning how to work with raw processes, in order to better understand the theory behind them. After this, I converted these processes into GenServers, which provide improved functionality and handle most of the client/server interactions automatically.
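
    To illustrate the shape of such a process, here is a stripped-down scorecard GenServer; the module name and API are my own sketch, not necessarily what the project uses:

```elixir
defmodule Yatzy.Scorecard do
  use GenServer

  ## Client API

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, %{}, opts)

  # Record the score chosen for a category this turn.
  def record(pid, category, score), do: GenServer.call(pid, {:record, category, score})

  # Sum of all recorded category scores.
  def total(pid), do: GenServer.call(pid, :total)

  ## Server callbacks

  @impl true
  def init(scores), do: {:ok, scores}

  @impl true
  def handle_call({:record, category, score}, _from, scores) do
    {:reply, :ok, Map.put(scores, category, score)}
  end

  def handle_call(:total, _from, scores) do
    {:reply, scores |> Map.values() |> Enum.sum(), scores}
  end
end
```

    The state (a map of category scores) lives inside the process, and the client functions hide the `call` plumbing from the rest of the game.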

    The supervision tree

    Another benefit of GenServers over raw processes is that they can be used as part of a supervision tree. In Elixir, a supervisor is a process that monitors other processes and restarts them if they crash. A supervision tree is a branching structure consisting of multiple supervisors and their child processes. In my Yatzy application, the supervision tree consists of a head supervisor with the scoring process as a child, along with another child supervisor for each player in the game. Each of these player supervisors has two children: a player and a scorecard.
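
    A sketch of the per-player supervisor illustrates the idea; here the two children are replaced with plain Agents so the snippet runs standalone, whereas the real children are the player and scorecard GenServers:

```elixir
defmodule Yatzy.PlayerSupervisor do
  use Supervisor

  def start_link(player_name) do
    Supervisor.start_link(__MODULE__, player_name)
  end

  @impl true
  def init(_player_name) do
    # Stand-ins for the real player and scorecard processes. With the
    # :one_for_one strategy, a crashed child is restarted on its own.
    children = [
      %{id: :player, start: {Agent, :start_link, [fn -> %{} end]}},
      %{id: :scorecard, start: {Agent, :start_link, [fn -> %{} end]}}
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end
end
```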

    Due to supervisors being syntactically similar to GenServers, the majority of this step was simple, as I had already learned how to implement the relevant API and callback functions. However, one mistake that took some time to notice was accidentally using GenServer.start_link instead of Supervisor.start_link in the API for the player supervisor. This problem was particularly hard to diagnose because it produced no compile or runtime errors, but it did result in the supervisor’s child processes not starting and the game not functioning.

    Finite state machine

    After setting up the supervision tree, I still needed to define one more process to handle the functions for running through a single player’s turn. This process was implemented as another child of the head supervisor. As this process needed to handle multiple different states representing different stages of the turn, I constructed it as a finite state machine using the GenStateMachine module.

    In this case, I defined four states, representing how many rolls are remaining in the turn: three, two, one, and none. It contains functions handling calls that represent a roll of the dice, which set the machine to its next state, and functions that reset it to its initial state at the end of the turn, including when the player decides not to use all their rolls.

    Below is an example of one of the calls, representing a player making their second roll in a turn.

    def handle_event({:call, from}, {:roll, keep}, :two_rolls, data) do
      data = data ++ keep
      {:next_state, :one_roll, data, [{:reply, from, data}]}
    end
    

    Compared to learning how to work with GenServers and Supervisors, this functionality was actually rather simple to pick up. I had never worked with finite state machines in other languages, but the examples of GenStateMachine in the Elixir documentation were easy to understand and contained all the information I needed in order to implement this process.

    User interface and recursion

    Once the required processes were in place in a supervision tree, it was time to implement a simple text-based interface allowing a full game of Yatzy to be played all the way through.

    This would require each player in turn to receive the results of a dice roll, be prompted to choose which, if any, dice to keep for their subsequent rolls, and then be prompted again to choose which category to score in for that turn. It should loop through the players in this way until the game is complete, at which point it should declare the winner and prompt the user to reset the scorecards and play again.

    Implementing the interface was the most complex and time-consuming part of the project. This required a significant amount of trial-and-error and researching through the Elixir docs, in order to get something functioning. However, one aspect that was easier than expected was working with recursive functions. I had rarely used recursion while working in Java due to the language’s focus on iterative loops, and as such never became fully comfortable with the technique. Implementing the interface required me to use recursion in several different places, and I was surprised at how easy it was to pick up in this language, with the pattern matching on function parameters making it simple to account for the end of the loop.

    The following is one of the recursive functions I implemented, which maps the results of a dice roll to the letters a, b, c, d, and e, allowing the player to pick which of the five they want to keep in the text-based interface.

    def map_dice([], _indexes), do: []

    def map_dice([head | tail], indexes) do
      index = String.to_atom(head)
      key_in_indexes = Map.has_key?(indexes, index)
      case index do
        index when key_in_indexes ->
          value = Map.get(indexes, index)
          [value | map_dice(tail, indexes)]
        _index ->
          # skip input letters that don't correspond to a die
          map_dice(tail, indexes)
      end
    end
    
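    To see the function in action, suppose a hypothetical turn in which the dice [3, 1, 6, 6, 2] were mapped to the letters a to e and the player chose to keep “a”, “c”, and “d” (a self-contained copy of the function, including the base clause for the empty list that lets the recursion terminate):

```elixir
defmodule Yatzy.Interface do
  # Base clause: an empty list of choices maps to an empty list of dice.
  def map_dice([], _indexes), do: []

  def map_dice([head | tail], indexes) do
    index = String.to_atom(head)
    key_in_indexes = Map.has_key?(indexes, index)
    case index do
      index when key_in_indexes ->
        value = Map.get(indexes, index)
        [value | map_dice(tail, indexes)]
      _index ->
        # skip input letters that don't correspond to a die
        map_dice(tail, indexes)
    end
  end
end

indexes = %{a: 3, b: 1, c: 6, d: 6, e: 2}
Yatzy.Interface.map_dice(["a", "c", "d"], indexes)
# => [3, 6, 6]
```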

    Future

    Although my Yatzy implementation is currently functioning correctly, I plan to extend the project further in the future. In the current version, only three players are supported, with their names hard-coded into the program. I would like future versions to support a dynamic number of players, along with the ability for the players to specify their own usernames.

    Additionally, I am also planning to learn the basics of Phoenix LiveView in the near future. Once I have done this, I would like to write a frontend for the program, allowing the players to interact with a more readable, visually appealing graphical interface, rather than the current text-based version.

    Conclusion

    Overall, I would describe my experience with the project as positive and feel that it served as a good introduction to Elixir.

    I was able to learn many of the basic features of the language naturally in order to fulfill the requirements of the game, and adjusted my ways of thinking about programming to better suit working with functional and concurrent programs. As a result, I feel like I have a good understanding of the basics of Elixir, and I am more confident about my ability to carry out other work with the language in the future.

    The post Learning functional and concurrent programming concepts with Elixir appeared first on Erlang Solutions .


      Erlang Solutions: Everything you need to know about Phoenix Framework 1.7

      news.movim.eu / PlanetJabber • 13 October, 2022 • 7 minutes

    It is an exciting time for the Elixir community. As you may have seen at ElixirConf or ElixirConf EU, we are celebrating the 10th anniversary of Elixir . Despite now being 10 years old, there is no slowdown in the number of exciting new features, frameworks, and improvements being made to the language.

    One of the most exciting developments for Elixir is undoubtedly Phoenix . It is a project that is growing in both features and use cases at an incredible pace. Phoenix 1.5 included some huge changes, including the addition of LiveView to the framework, the creation of LiveDashboard, and the new version of PubSub (2.0).

    Next, Phoenix 1.6 introduced even more exciting features, most notably the HEEx engine, the authentication and mailer generators, better integration with LiveView, and the removal of Node and webpack, which were replaced with the simpler esbuild tool.

    For many of us, each new Phoenix framework release brings back the feeling of being a kid at Christmas: we wait with eager anticipation for Chris McCord to announce the new toys we have to play with for the upcoming year. But with these new toys also comes a challenge for those who want to keep their skills and their systems up to date: the migration nightmare. We will revisit that at the end of this post.

    Roadmap

    Since Phoenix 1.5 there has been a noticeable trend of moving towards LiveView. As the framework progresses, LiveView can replace more and more JavaScript code, allowing the Elixir developer to get better control of the HTML generation. The latest release continues this trend with the following new features:

    • Verified Routes. This gives us the ability to define paths using a sigil that is checked at compile time against the routes defined in the router.
    • Tailwind. In addition to answering our prayers concerning JavaScript and HTML, this new version also helps manage CSS.
    • Component-based generators. These features offer us a new and better way to write components.
    • Authentication generation code using LiveView. This lets us generate the authentication code using LiveView instead of the normal controllers, views, and templates.

    We will go deeper into each of these features, but you can already see a trend, right? We are moving more and more to LiveView, and at the same time removing the need to manage things like HTML, JavaScript, and CSS by hand.

    First, let’s look at LiveView specifically. For release 0.18, Chris McCord announced these improvements:

    • Declarative assigns/slots – which let us define information about attributes and slots which are included inside of the components.
    • HTML Formatter – which applies mix format to HEEx code, even when it’s included inside of the sigil ~H.
    • Accessibility building blocks.

    Now let’s look at each of these elements in deeper detail.

    Verified Routes

    The story is that Jason Stiebs (from the Phoenix team) had been requesting a better, less verbose way to use the routes for the last 8 years. The 12th time he requested it, Chris McCord agreed, and José Valim had a fantastic way to make it happen.

    The basic idea is that if we have this:

    This is generating the route which we could use in this way:

    This is very verbose, but it could be even worse if we have a definition of the routes nested like this one:

    And it is just as verbose when we use LiveView:

    To avoid this, the Verified Routes provides us a shortcut using the path:

    As you can see, using the sigil “~p” we can define the path where we want to go and it’s completely equivalent to using the previous Routes helper function.

    The main advantage of this feature is that it allows us to write the path concisely and still check if that route is valid or not in the same way we would use the Route Helper function.
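
    As an illustration (the route and variable names here are my own, not from the talk), a classic helper call and its verified-route equivalent might look like this:

```elixir
# Before: the Routes helper generated from the router (the verbose form)
Routes.user_post_path(conn, :show, user, post)

# After: a verified route; the ~p sigil is checked at compile time
# against the routes actually defined in the router
~p"/users/#{user}/posts/#{post}"
```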

    Tailwind

    To understand this change let’s look at what Adam Wathan (creator of Tailwind) said about CSS and the use of CSS:

    The use of CSS in a traditional way, that is, using “semantic class names”, is hard to maintain, and that’s why he created Tailwind. Tailwind is based on specifying how the element should be shown. There can be different elements that are semantically the same, for example, two “Accept” buttons where we want one to appear big and the other a bit narrower. Under this paradigm, we’d be forced to use the class “accept-button” in addition to extra classes modifying each case, which cannot be reused.

    The other approach is to implement small modifications to how we present the buttons. In this way, we can define a lot in HTML and get rid of the CSS.

    The main idea, as I said previously, is to replace as much CSS as possible in the same way as LiveView replaced a lot of JavaScript:

    For example, using Tailwind with HTML and getting rid of CSS, we could build a button like this one with the code shown in the image below:

    It could be argued that it’s complex, but it’s indeed perfect from the point of view of LiveView and components because these classes can be encapsulated inside of the component and we can use it in this way:

    And finally, in the template:

    Easy, right?
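
    As a rough sketch of the pattern (the utility classes and component name are illustrative, not the ones shown in the talk), the classes live once inside a function component, and the templates stay clean:

```elixir
# The Tailwind utility classes are written once, inside the component:
def button(assigns) do
  ~H"""
  <button class="rounded-lg bg-indigo-600 px-4 py-2 font-semibold text-white hover:bg-indigo-500">
    <%= render_slot(@inner_block) %>
  </button>
  """
end
```

    A template then just renders `<.button>Accept</.button>`, with no separate CSS file involved.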

    Authentication generation code using LiveView

    A big thank you to Berenice Medel on the Phoenix team, who had the great idea of making the authentication template generation work with LiveView.

    Declarative Assigns / Slots

    Before going into this section, Chris McCord gave a big thank you to Marius Saraiva and Connor Lay. They are the people in charge of all of the improvements regarding declarative assigns, slots, and HEEx.

    The idea behind slots and attrs is to provide us with a way to define attributes and sub-elements inside of a defined component. The example above defines a component with the name “table”. It defines the attributes “row_id” and “rest”; as you can see in the documentation, the attributes for the table are “rows”, “row_id”, and “class”. That means we can set “row_id” explicitly, while “rest” will hold a map with all of the remaining attributes.

    As we said, a slot is a way to indicate that we are going to use a sub-element “col” inside of the “table”. In the example, you can see two “col” elements inside of “table”. The “col” element defines only one attribute, “if”, which is a boolean.
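
    The declarative assigns API from LiveView 0.18 looks roughly like this (a sketch based on the description above; the component body is elided):

```elixir
attr :rows, :list, required: true
attr :row_id, :any, default: nil
attr :rest, :global   # catches every attribute not declared above

slot :col do
  attr :if, :boolean
end

def table(assigns) do
  # render each row, forwarding everything captured in @rest
  # to the underlying <table> tag
end
```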

    HTML Formatter

    A big thank you to Felipe Renan, who worked on the implementation of this for HEEx to be included in Phoenix. Now it’s possible to have mix format fix the formatting of code written inside of the templates, even inside of the ~H sigil.

    Accessibility building blocks

    Phoenix 1.7 includes some primitives for helping to create more accessible websites. One of them is “focus_wrap”:
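
    For instance (markup illustrative), wrapping the interactive elements of a modal keeps Tab and Shift+Tab cycling inside it:

```heex
<.focus_wrap id="modal-content">
  <button phx-click="save">Save</button>
  <button phx-click="cancel">Cancel</button>
</.focus_wrap>
```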

    This helps define the areas where you want to shift focus between multiple elements inside of a defined area instead of a whole website.

    This works in combination with functions in the JS module which configure the focus like a stack. When you go into the modal it pushes the focus area that we use and when the modal is closed, we pop out from that area of the stack and stay with the previous one.

    More improvements in the Roadmap

    One of the improvements for LiveView is Storybook. Storybook is a visual UI creator which lets us define the components we want to include in our websites and then generates the code to implement them. Christian Blavier did great work starting this in his repository, but he has now stepped back, and the Phoenix team is going to move it forward and evolve it.

    Streaming data, for optimized handling of large collections, is another priority on the roadmap. The work for this has already started; fingers crossed that it might be announced in the next release.

    During recent conferences, another speaker raised a concern about the messaging incompatibility between LiveView and LiveComponent, luckily, this is on the roadmap to be fixed shortly.

    And is that all?

    With all the developments in Phoenix, it would be easy to talk at much greater length and in much greater detail. The pace of the Phoenix team’s progress is impressive and exciting.

    As it continues to grow it is easy to imagine a future where we only need to write HEEx code inside of Elixir to get full control of generated HTML, CSS, and JavaScript for the browser. It’s exciting to imagine and will be sure to further grow the use and adoption of Elixir as a full-stack technology.

    Ready to adopt Elixir? Or need help with your implementation? Contact us, or ask about our training options.

    The post Everything you need to know about Phoenix Framework 1.7 appeared first on Erlang Solutions .


      Prosodical Thoughts: Mutation Testing in Prosody

      news.movim.eu / PlanetJabber • 13 October, 2022 • 7 minutes

    This is a post about a new automated testing technique we have recently adopted to help us during our daily development work on Prosody. It’s probably most interesting to developers, but anyone technically-inclined should be able to follow along!

    If you’re unfamiliar with our project, it’s an open-source real-time messaging server, built around the XMPP protocol. It’s used by many organizations and self-hosting hobbyists, and also powers applications such as Snikket , JMP.chat and Jitsi Meet .

    Like most software projects, we routinely use automated testing tools to ensure Prosody is behaving correctly, even as we continue to work daily on fixes and improvements throughout the project.

    We use unit tests, which test the individual modules that Prosody is built from, via the busted testing tool for Lua. We also developed scansion , an automated XMPP client, for our integration tests that ensure Prosody as a whole is functioning as expected at the XMPP level.

    Recently we’ve been experimenting with a new testing technique.

    Introducing ‘mutation testing’

    Mutation testing is a way to test the tests. It is an automated process that introduces intentional errors (known as “mutations”) into the source code, and then runs the tests after each possible mutation, to make sure they identify the error and fail.

    Example mutations are things like changing true to false , or + to - . If the program was originally correct, then these changes should make it incorrect and the tests should fail. However, if the tests were not extensive enough, they might not notice the change and continue to report that the code is working correctly. That’s when there is work to do!

    Mutation testing is similar and related to other testing methods such as fault injection , which intentionally introduce errors into an application at runtime to ensure it handles them correctly. Mutation testing is specifically about errors introduced by modifying the application source code in certain ways. For this reason it is applicable to any code written in a given language, and does not need to be aware of any application-specific APIs or the runtime environment.

    One end result of a full mutation testing analysis is a “mutation score”, which is simply the percentage of mutated versions of the program (“mutants”) that the test suite successfully identified. Along with coverage (which counts the percentage of lines successfully executed during a test run), the mutation score provides a way to measure the quality of a test suite.

    Code coverage is not enough

    Measuring coverage alone does not suffice to assess the quality of a test suite. Take this example function:

    function max(a, b, c)
    	if a > b or a > c then
    		return a
    	elseif b > a or b > c then
    		return b
    	elseif c > a or c > b then
    		return c
    	end
    end
    

    This (not necessarily correct) function returns the largest of three input values. The lazy (fictional!) developer who wrote it was asked to ensure 100% test coverage for this function, here is the set of tests they produced:

    assert(max(10, 0, 0) == 10) -- test case 1, a is greater
    assert(max(0, 10, 0) == 10) -- test case 2, b is greater
    assert(max(0, 0, 10) == 10) -- test case 3, c is greater
    

    Like most tests, it executes the function with various input values and ensures it returns the expected result. In this case, the developer moves the maximum value ‘10’ between the three input parameters and successfully exercises every line of the function, achieving 100% code coverage. Mission accomplished!

    But wait… is this really a comprehensive test suite? How can we judge how extensively the behaviour of this function is actually being tested?

    Mutation testing

    Running this function through a mutation testing tool will highlight behaviour that the developer forgot to test. So that’s exactly what I did.

    The tool generated 5 mutants, and the tests failed to catch 4 of them. This means the test suite only has a mutation score of 20%. This is a very low score, and despite the 100% line and branch coverage of the tests, we now have a strong indication that they are inadequate.

    To fix this, we next have to analyze the mutants that our tests considered acceptable. Here is mutant number one:

    function max(a, b, c)
    	if false and a > b or a > c then
    		return a
    	elseif b > a or b > c then
    		return b
    	elseif c > a or c > b then
    		return c
    	end
    end
    

    See what it did? It changed the first if a > b to if false and a > b , effectively ensuring the condition a > b will never be checked. A condition was entirely disabled, yet the tests continued to pass?! There are two possible reasons for this: either this condition is not really needed for the program to work correctly, or we just don’t have any tests verifying that this condition is doing its job.

    Which test case should have tested this path? Obviously ‘test case 1’:

    assert(max(10, 0, 0) == 10)
    

    a is the greatest input here, and indeed the test confirms that the function returns it correctly. But according to our mutation testing, this is happening even without the a > b check, and that seems wrong - we would only want to return a if it is also greater than b . So let’s add a test for the case where a is greater than c but not greater than b :

    assert(max(10, 15, 0) == 15)
    

    What a surprise, our new test fails:

    Failure → spec/max_spec.lua @ 4
    max produces the expected results
    spec/max_spec.lua:1: Expected objects to be equal.
    Passed in:
    (number) 10
    Expected:
    (number) 15
    

    With this new test case added, the mutant we looked at will no longer be passed, and we’ve successfully improved our mutation score.

    Mutation testing helped us discover that our tests were not complete, despite having 100% coverage, and helped us identify which test cases we had forgotten to write. We can now go and fix our code to make the new test case pass, resulting in better tests and more confidence in the correctness of our code.

    Mutation testing limitations

    As a new tool in our toolbox, mutation testing has already helped us improve lots of our unit tests in ways we didn’t previously know they were lacking, and we’re focusing especially on improving our tests that currently have a low mutation score. But before you get too excited, you should be aware that although it is an amazing tool to have, it is not entirely perfect.

    Probably the biggest problem with mutation testing, as anyone who tries it will soon discover, is what are called ‘equivalent mutants’. These are mutated versions of the source code that still behave correctly. Unfortunately, identifying whether mutants are equivalent to the original code often requires manual inspection by a developer.

    Equivalent mutants are common where there are performance optimizations in the code but the code still works correctly without them. There are other cases too, such as when code only deals with whether a number is positive or negative (the mutation tool might change -1 to -2 and expect the tests to fail). There are also APIs where modifying parameters will not change the result. A common example of this in Prosody’s code is Lua’s string.sub() , where indices outside the boundaries of the input string do not affect the result ( string.sub("test", 1, 4) and string.sub("test", 1, 5) are equivalent because the string is only 4 characters long).

    The implementation

    Although mutation testing is something I first read about many years ago and it immediately interested me, there were no mutation testing tools available for Lua source code at the time. As this is the language I spend most of my time in while working on Prosody, I’ve never been able to properly use the technique.

    However, for our new authorization API in Prosody, I’m currently adding more new code and tests than usual, and the new code is security-related. I want to be sure that everything I add is covered well by the accompanying tests, and that once again sparked my interest in mutation testing to support this effort.

    Still no tool was available for Lua, so I set aside a couple of hours to determine whether producing such a thing would be feasible. Luckily I didn’t need to start from scratch - there is already a mature project for parsing and modifying Lua source code called ltokenp written by Luiz Henrique de Figueiredo. On top of this I needed to write a small filter script to actually define the mutations, and a helper script for the testing tool we use ( busted ) to actually inject the mutated source code during test runs.

    Combining this all together, I wrote a simple shell script to wrap the process of generating the mutants, running the tests, and keeping score. The result is a single-file script that I’ve committed to the Prosody repository, and we will probably link it up to our CI in the future.

    It’s still very young, and there are many improvements that could be made, but it is already proving very useful to us. If there is sufficient interest, maybe it will graduate into its own project some day!

    If you’re interested in learning more about mutation testing, check out these resources:


      ProcessOne: Matrix protocol added to ejabberd

      news.movim.eu / PlanetJabber • 13 October, 2022 • 2 minutes

    ejabberd is already the most versatile and scalable messaging server. In this post, we are giving a sneak peek at what is coming next.

    ejabberd just got a new ace up its sleeve: you can now use ejabberd to talk with other Matrix servers. Matrix is a protocol sometimes used for small corporate server messaging.

    Of course, you all know that ejabberd supports the XMPP instant messaging protocol with hundreds of XMPP extensions; this is what it is famous for.

    The second major protocol in ejabberd is MQTT. ejabberd supports MQTT 5 with clustering, and is massively scalable. ejabberd can be used to implement Internet of Things projects, using either XMPP or MQTT, and it also supports hybrid workflows, where you can mix humans and machines exchanging messages on the same platform.

    It also supports SIP, as you can connect to ejabberd with a SIP client, so that you can use a softphone directly with ejabberd for internal calls.

    So far, so good: ejabberd leads both in terms of performance and in the number of messaging protocols it supports.

    We always keep an eye on new messaging protocols. Recently, the Matrix protocol emerged as a new way to implement messaging for small corporate servers.

    ejabberd adds support for Matrix protocol

    Of course, by design, the Matrix protocol cannot scale as well as the XMPP or MQTT protocols. At the heart of the Matrix protocol, you have a kind of merging algorithm that is a bit reminiscent of Google Wave. It means that a conversation is conceptually represented as a sort of document that you constantly merge on the server. This is a resource-consuming process that happens on the server for each message received in every conversation. That’s why Matrix has the reputation of being so difficult to scale.

    Even if it is not as scalable as XMPP, we believe that we can make Matrix much more scalable than what it is now. That’s what we are doing right now.

    As a first step, we have been working on implementing a large subset of the Matrix protocol as a bridge in ejabberd.

    It means that an ejabberd server will be able to act as a Matrix server in the Matrix ecosystem. XMPP users will be able to exchange messages with Matrix users, transparently.

    To do that, we implemented the Matrix protocol for conversations and the server-to-server protocol to allow interop between XMPP and Matrix protocol.

    This feature is coming first for our customers in the coming weeks, whether they are using ejabberd Business Edition internally or the Fluux ejabberd SaaS platform. It will come later to ejabberd Community Edition.

    Interested? Let’s talk! Contact us .

    The post Matrix protocol added to ejabberd first appeared on ProcessOne .

      Profanity: Profanity 0.13.1

      news.movim.eu / PlanetJabber • 12 October, 2022

    One month ago we released Profanity 0.13.0 and yesterday the minor release 0.13.1.

    18 people contributed code to this release: @binex-dsk, @cockroach, @DebXWoody, @MarcoPolo-PasTonMolo, @mdosch, @nandesu-utils, @netboy3, @paulfertser, @sjaeckel, @Zash, @omar-polo, @wahjava, @vinegret, @sgn, Max Wuttke, @tran-h-trung, @techmetx11 and @jubalh. Also a big thanks to our sponsors: @mdosch, @wstrm, @LeSpocky, @jamesponddotco and one anonymous person.

    We would also like to thank our testers, packagers and users.

    The release has already landed in several major distributions.

    For a list of changes please see the 0.13.0 and 0.13.1 release notes.