

      JMP: Newsletter: Holidays

      news.movim.eu / PlanetJabber • 13 December, 2023 • 2 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client. Among other things, JMP has these features:

    • Your phone number on every device
    • Multiple phone numbers, one app
    • Free as in Freedom
    • Share one number with multiple people

    Automatic refill for users of the data plan was rolled out to everyone this fall. This has been going well and we fully expect to enable new SIM and eSIM orders for all JMP customers (with no waitlist) in January, after the holidays.

    Speaking of holidays, MBOA staff, including JMP support staff, will be taking an end-of-year break just like we always do. Expect support response times to be longer than usual from December 18 until January 2.

    This fall also saw the silent launch of new inventory features for JMP. Historically, JMP has never held inventory of phone numbers, buying them directly from our carrier partners when a customer places an order. Unfortunately, this leaves us at the mercy of which regions our partners choose to keep in stock, and this year saw several occasions where there was no stock at all for all of Canada. So we now have a limited amount of local inventory to improve coverage of important regions, and may eventually be adding a function for “premium numbers” for very rare area codes or similar which cost more to stock.

    We have also been working in partnership with Snikket on a cross-platform SDK which we hope will make it easier for developers to build applications that integrate with the Jabber network without needing to be protocol or standards experts. Watch the chatroom and the Snikket blog for more information and demos.

    There have also been several releases of the Cheogram Android app (latest is 2.13.0-1) with new features including:

    • Improved call connection stability
    • Verify DNSSEC and DANE and show status in UI
    • Show command UI on channels when there are commands to show
    • Show thread selector when starting a mention
    • Circle around thread selector
    • Several Android 14 specific fixes, including for dialler integration
    • Opening WebXDC from home screen even from a very old message

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!


      Ignite Realtime Blog: Smack 4.5.0-alpha2 released

      news.movim.eu / PlanetJabber • 9 December, 2023

    We are happy to announce the second alpha release of Smack’s upcoming 4.5 version.

    This version fixes a nasty bug in Smack’s reactor, includes support for XMPP over WebSocket connections, and much more. Even though Smack has good test coverage, thanks to its comprehensive unit test suite and integration test framework, we kindly ask you to test pre-releases and report feedback.

    As always, this Smack release is available via Maven Central.

    1 post - 1 participant

    Read full topic


      Erlang Solutions: Reimplementing Technical Debt with State Machines

      news.movim.eu / PlanetJabber • 6 December, 2023 • 16 minutes

    In the ever-evolving landscape of software development, mastering the art of managing complexity is a skill every developer and manager alike aspires to attain. One powerful tool that often remains in the shadows, yet holds the key to simplifying intricate systems, is the humble state machine. Let’s get started.

    Models

    State machines can be seen as models that represent system behaviour. Much like a flowchart on steroids, these models represent an easy way to visualise complex computation flows through a system.

    A typical case study for state machines is the modelling of internet protocol implementations. Be it TLS, SSH, HTTP or XMPP, these protocols define an abstract machine that reacts to client input by transforming its own state or, if the input is invalid, dying.

    A case study

    Let’s consider the case of a simplified version of the XMPP protocol. This messaging protocol is implemented on top of a TCP stream and uses XML elements as its payload format. On the server side, the protocol goes as follows:

    1. The machine is in the “waiting for stream-start” state; it hasn’t received any input yet.
    2. When the client sends such a stream-start, a payload looking like the following:

    <stream:stream to='localhost' version='1.0' xml:lang='en' xmlns='jabber:client' xmlns:stream='http://etherx.jabber.org/streams'>

    Then the machine forwards certain payloads to the client – a stream-start and a stream-features, the details of which are omitted in this document for simplicity – and transitions to “waiting for features before authentication”.

    3. When the client sends an authentication request, a payload looking like the following:

    <auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl' mechanism='PLAIN'>AGFsaWNFAG1hdHlncnlzYQ==</auth>

    Then the machine, if no request-response mechanism is required for authentication, answers the client and transitions to a new “waiting for stream-start” state, but this time “after authentication”.

    4. When the client again starts the stream, this time authenticated, with a payload like the following:

    <stream:stream to='localhost' version='1.0' xml:lang='en' xmlns='jabber:client' xmlns:stream='http://etherx.jabber.org/streams'>

    Then the machine again answers the respective payloads, and transitions to a new “waiting for features after authentication”.

    5. And finally, when the client sends

    <iq type='set' id='1c037e23fab169b92edb4b123fba1da6'>

    <bind xmlns='urn:ietf:params:xml:ns:xmpp-bind'>

    <resource>res1</resource>

    </bind>

    </iq>

    Then it transitions to “session established”.

    6. From this point on, other machines can find it and send it new payloads, called “stanzas”, which are XML elements whose names are one of “message”, “iq”, or “presence”. We will again omit the details of these for the sake of simplicity.

    Because often one picture is worth a thousand words, see the diagram below:

    [Diagram: the state transitions of the simplified XMPP handshake described above.]
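    The numbered steps above can also be condensed into a plain transition function. This is a hedged Erlang sketch of ours (state and input names are illustrative, not from any real implementation):

```erlang
%% Illustrative transition table for the simplified handshake above.
%% next(State, Input) -> NextState | stop.
next({wait_for_stream,  not_auth}, stream_start) -> {wait_for_feature, not_auth};
next({wait_for_feature, not_auth}, authenticate) -> {wait_for_stream,  auth};
next({wait_for_stream,  auth},     stream_start) -> {wait_for_feature, auth};
next({wait_for_feature, auth},     bind)         -> session;
next(session,           _Stanza)                 -> session;   %% stanzas keep the session alive
next(_State,            _Input)                  -> stop.      %% invalid input: the machine dies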

    Implementing the case

    Textbook examples of state machines, and indeed the old OTP implementation of such behaviour, gen_fsm, always give state machines whose states can be defined by a single name, not taking into account that such a name can be “the name of” a data structure instead. In Erlang in particular, gen_fsm imposed the name of the state to be an atom, just so that it can be mapped to a function name and be callable. But this is an unfortunate oversight of complexity management, where the state of a machine depends on a set of variables that, if not in the name, need to be stored elsewhere, usually the machine’s data, breaking the abstraction.

    Observe, in the example above, the case of waiting for stream-start and features: both states exist within the unauthenticated and the authenticated realms. A naive implementation, where the function name is the state, the first parameter is the client’s input, and the second parameter is the machine’s data, would look like this:

    wait_for_stream(stream_start, #data{auth = false} = Data) ->
        {wait_for_feature, Data}.
    
    wait_for_feature(authenticate, #data{auth = false} = Data) ->
        {wait_for_stream, Data#data{auth = true}}.
    
    wait_for_stream(stream_start, #data{auth = true} = Data) ->
        {wait_for_feature, Data}.
    
    wait_for_feature(session, #data{auth = true} = Data) ->
        {session, Data}.

    In each case we will take different actions, like building different answers for the client, so we cannot coalesce seemingly similar states into fewer functions.

    But what if we want to implement retries on authentication?

    We need to add a new field to the data record, as follows:

    wait_for_stream(stream_start, #data{auth = false} = Data) ->
        {wait_for_feature, Data#data{retry = 3}}.
    
    wait_for_feature(authenticate, #data{auth = false} = Data) ->
        {wait_for_stream, Data#data{auth = true}};
    wait_for_feature(_, #data{auth = false, retry = 0}) ->
        stop;
    wait_for_feature(_, #data{auth = false, retry = N} = Data) ->
        {wait_for_feature, Data#data{retry = N - 1}}.

    The problem here is twofold:

    1. When the machine is authenticated, this field is not valid anymore, yet it will be kept in the data record for the whole life of the machine, wasting memory and garbage collection time.
    2. It breaks the finite state machine abstraction –too early–, as it uses an unbounded memory field with random access lookups to decide how to compute the next transition, effectively behaving like a full Turing Machine — note that this power is one we will need nevertheless, but we will introduce it for a completely different purpose.

    This can get unwieldy when we introduce more features that depend on specific states. For example, when authentication requires roundtrips and the final result depends on all the accumulated input of such roundtrips, we would also accumulate them on the data record, and pattern-match which input is next, or introduce more function names.

    Or if authentication requires a request-response roundtrip to a separate machine, if we want to make such requests asynchronous because we want to process more authentication input while the server processes the first payload, we would also need to handle more states and remember the accumulated one. Again, storing these requests on the data record keeps more data permanent that is relevant only to this state, and uses more memory outside of the state definition. Fixing this antipattern lets us reduce the data record from having 62 fields to being composed of only 10.

    Before we go any further, let’s talk a bit about automata.

    Automata theory

    In computer science, and more particularly in automata theory, we have at our disposal a set of theoretical constructs that allow us to model certain problems of computation and, more ambitiously, to define what a computer can do altogether. Namely, there are three automata, ordered by computing power: finite state machines, pushdown automata, and Turing machines. Each defines a very specific algorithm schema together with a state of “termination”. With the given algorithm schema and their definition of termination, they are distinguished by the input they are able to accept while terminating.

    Conceptually, a Turing Machine is a machine capable of computing everything we know computers can do: really, Turing Machines and our modern computers are theoretically one and the same thing, modulo equivalence.

    Let’s get more mathematical. Let’s give some definitions:

    1. Alphabet: a set denoted as Σ of input symbols, for example Σ = {0,1}, or Σ = [ASCII]
    2. A string over Σ: a concatenation of symbols of the alphabet Σ
    3. The Power of an Alphabet: Σ*, the set of all possible strings over Σ, including the empty string.

    An automaton is said to recognise a string over Σ if it “terminates” when consuming the string as input. On this view, automata generate formal languages, that is, specific subsets of Σ* with certain properties. Let’s see the typical automata:

    1. A Finite State Machine is a finite set of states Q (hence the name of the concept), an alphabet Σ and a function 𝛿 of a state and an input symbol that outputs a new state (and can have side-effects)
    2. A Pushdown Automaton is a finite set of states Q, an alphabet Σ, a stack Γ of symbols of Σ, and a function 𝛿 of a state, an input symbol, and the stack, that outputs a new state and modifies the stack by either popping the last symbol, pushing a new symbol, or both (effectively swapping the last symbol).
    3. A Turing Machine is a finite set of states Q, an alphabet Σ, an infinite tape Γ of cells containing symbols of Σ, and a function 𝛿 of a state, an input symbol, and the current tape cell, that outputs a new state, a new symbol to write in the current cell (which might be the same as before), and a direction, either left or right, to move the head of the tape.

    Conceptually, Finite State Machines can “keep track of” one thing, while Pushdown Automata can “keep track of” up to two things. For example, there is a state machine that can recognise all strings that have an even number of zeroes, but there is no state machine that can recognise all strings that have an equal number of ones and zeroes. However, this can be done by a pushdown automaton. But neither state machines nor pushdown automata can generate the language of all strings that have an equal number of a’s, b’s, and c’s: this, a Turing Machine can do.
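    To make the first rung of that ladder concrete, here is a hedged Erlang sketch (module and function names are ours) of a finite state machine over Σ = {0,1} that recognises strings with an even number of zeroes. The only “memory” it has is the current state name:

```erlang
-module(even_zeroes).
-export([accepts/1]).

%% Two states: 'even' and 'odd', the parity of zeroes seen so far.
%% Terminating in 'even' means the string is accepted.
accepts(String) -> run(even, String).

run(even, [])       -> true;
run(odd,  [])       -> false;
run(even, [$0 | T]) -> run(odd,  T);   %% a zero flips the parity
run(odd,  [$0 | T]) -> run(even, T);
run(S,    [$1 | T]) -> run(S,    T).   %% a one leaves the state untouched
```

    For example, `accepts("1001")` returns true while `accepts("10")` returns false; no stack or tape is ever needed.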

    How do these definitions relate to our protocols, when the input has been defined as an alphabet? In all protocols worth working on, however many inputs there are, they form a finite set that can be enumerated. When we define an input element as, for example, <stream:start to=[SomeHost]/>, the list of all possible hosts in the world is finite, and we can isomorphically map these hosts to integers and define our state machines as consuming integers. Likewise for all other input schemas. So, in order to save the space of defining all possible inputs and all possible states of our machines, we will work with schemas, that is, rules to construct states and inputs. The abstraction is isomorphic.

    Complex states

    We know that state machine behaviours, both the old gen_fsm and the new gen_statem, really are Turing Machines: they both keep a data record that can hold unbounded memory, hence acting as the Turing Machine tape. The OTP documentation for the gen_statem behaviour even says so explicitly:

    Like most gen_ behaviours, gen_statem keeps a server Data besides the state. Because of this, and as there is no restriction on the number of states (assuming that there is enough virtual machine memory) or on the number of distinct input events, a state machine implemented with this behaviour is in fact Turing complete. But it feels mostly like an Event-Driven Mealy machine .

    But we can still model a state machine schema with accuracy. On initialisation, gen_statem admits a callback mode called handle_event_function. We won’t go into the details here; they are well explained in the official documentation.

    By choosing this callback mode, we can use data structures as states. Note again that, theoretically, a state machine whose states are defined by complex data structures is isomorphic to one that gives a unique name to every possible combination of those data structures’ internals: however large such a set may be, it is still finite.
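    A minimal skeleton showing how that callback mode is selected might look like the following (a sketch against the real gen_statem API; the state term and module name are illustrative):

```erlang
-module(minimal_statem).
-behaviour(gen_statem).
-export([start_link/0]).
-export([callback_mode/0, init/1, handle_event/4]).

start_link() -> gen_statem:start_link(?MODULE, [], []).

%% With handle_event_function, every event lands in handle_event/4,
%% so the state may be an arbitrary term, not just an atom.
callback_mode() -> handle_event_function.

init([]) -> {ok, [{wait_for_stream, not_auth}], no_data}.

handle_event(cast, stream_start, [{wait_for_stream, not_auth}], Data) ->
    {next_state, [{wait_for_feature, not_auth}, {retries, 3}], Data};
handle_event(_Type, _Event, _State, _Data) ->
    keep_state_and_data.
```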

    Now, let’s implement the previous protocol in an equivalent manner, but with no data record whatsoever, with retries and asynchronous authentication included:

    handle_event(_, {stream_start, Host},
                 [{wait_for_stream, not_auth}], _) ->
        StartCreds = get_configured_auth_for_host(Host),
        {next_state, [{wait_for_feature, not_auth}, {creds, StartCreds}, {retries, 3}]};
    
    handle_event(_, {authenticate, _Creds},
                 [{wait_for_feature, not_auth}, {creds, _StartCreds}, {retries, 0}], _) ->
        stop;
    handle_event(_, {authenticate, Creds},
                 [{wait_for_feature, not_auth}, {creds, StartCreds}, {retries, N}], _) ->
        Req = auth_server:authenticate(StartCreds, Creds),
        {next_state, [{wait_for_feature, not_auth}, {req, Req}, {creds, Creds}, {retries, N - 1}]};
    handle_event(_, {authenticated, Req},
                 [{wait_for_feature, not_auth}, {req, Req}, {creds, Creds} | _], _) ->
        {next_state, [{wait_for_stream, auth}, {jid, get_jid(Creds)}]};
    handle_event(_, _Other,
                 [{wait_for_feature, not_auth} | _], _) ->
        {keep_state_and_data, [postpone]};
    
    handle_event(_, {stream_start, _Host}, [{wait_for_stream, auth}, {jid, JID}], _) ->
        {next_state, [{wait_for_feature, auth}, {jid, JID}]};
    
    handle_event(_, {session, Resource}, [{wait_for_feature, auth}, {jid, JID}], _) ->
        FullJID = jid:replace_resource(JID, Resource),
        session_manager:put(self(), FullJID),
        {next_state, [{session, FullJID}]};

    And from this point on, we have a session with a known Jabber IDentifier (JID) registered in the session manager, which can send and receive messages. Note how the code pattern-matches on the given input and the state together, and how the state is a proplist ordered so that every element is a substate of the previous one.

    Now the machine is ready to send and receive messages, so we can add the following code:

    handle_event(_, {send_message_to, Message, To},
                 [{session, FullJID}], _) ->
        ToPid = session_manager:get(To),
        ToPid ! {receive_message_from, Message, FullJID},
        keep_state_and_data;
    handle_event(_, {receive_message_from, Message, _From},
                 [{session, _FullJID}], #data{socket = Socket}) ->
        tcp_socket:send(Socket, Message),
        keep_state_and_data;

    Only in these two function clauses do state machines interact with each other. There’s only one element that needs to be stored on the data record: the Socket. This element is valid for the entire life of the state machine, and while we could include it in the state definition for every state, for once we might as well keep it globally on the data record, as it is globally valid.

    Please read the code carefully, as you’ll find it is self-explanatory.

    Staged processing of events

    A protocol like XMPP is defined entirely in the application layer of the OSI model, but as an implementation detail we need to deal with the TCP (and potentially TLS) packets and transform them into the XML data structures that XMPP will use as payloads. This can be implemented as a separate gen_server that owns the socket, receives the TCP packets, decrypts them, decodes the XML binaries, and sends the final XML data structure to the state machine for processing. In fact, this is how this protocol was originally implemented, but for completely different reasons.

    In much older versions of OTP, SSL was implemented in pure Erlang code, and crypto operations (basically heavy number-crunching) were notoriously slow in Erlang. Furthermore, XML parsing was also done in pure Erlang, using linked lists as the underlying representation of strings. Both these operations were terribly slow and prone to producing enormous amounts of garbage, so they were implemented in a separate process: not for the purity of the state machine abstraction, but simply to unblock the original state machine for other protocol-related processing tasks.

    But this means a certain duplication. Every client now has two Erlang processes that send messages to each other, effectively incurring a lot of copying. Nowadays OTP implements crypto operations by binding to native libcrypto code, and XML parsing is done using exml, our own fastest XML parser available in the BEAM world. So the cost of packet preprocessing is now lower than the cost of the message copying, and therefore it can all be implemented in a single process.

    Enter internal events:

    handle_event(info, {tls, Socket, Payload}, _, #data{socket = Socket}) ->
        XmlElements = exml:parse(tls:decrypt(Socket, Payload)),
        StreamEvents = [{next_event, internal, El} || El <- XmlElements],
        {keep_state_and_data, StreamEvents};

    Using this mechanism, all info messages from the socket are preprocessed in a single function head, and all the previous handlers simply need to match on events of type internal whose contents are an XML data structure.

    A pure abstraction

    We have prototyped a state machine implementing the full XMPP Core protocol (RFC 6120) without violating the abstraction of the state machine. At no point do we have a full Turing-complete machine, or even a pushdown automaton. We have a machine with a finite set of states and a finite set of input strings (albeit large, as both are defined schematically), and a function, `handle_event/4`, that takes a new input and the current state and calculates the side effects and the next state.

    However, for convenience we might break the abstraction in sensible ways. For example, in XMPP you might want to enable different configurations for different hosts, and as the host is given in the very first event, you might as well store the host and the configuration type expected for this connection in the data record – this is what we do in MongooseIM’s implementation of the XMPP server.

    Breaking purity

    But there’s one breaking point in the XMPP case, and it is in the name of the protocol: the “X” stands for extensible. That is, any number of extensions can be defined and enabled, and they can significantly change the behaviour of the machine by introducing new states or responding to new events. This means that the function 𝛿 that decides the next step and the side-effects does not depend only on the current state and the current input, but also on the enabled extensions and the data of those extensions.

    Only at this point do we need to break the finite state machine abstraction: the data record will keep an unbounded map of extensions and their data records, and 𝛿 will need to take this map into account to decide not only the next state and the side-effects, but also what to write to the map. That is, here our state machine does finally turn into a fully-fledged Turing Machine.

    With great power…

    Restraining your protocol to a Finite State Machine has certain advantages:

    • Memory consumption: the main difference, simplifying, between a Turing Machine and an FSM is that the Turing Machine has infinite memory at its disposal. When you have too many Turing Machines roaming around your system, it can get hard to reason about the amount of memory they all consume in aggregate; in contrast, it’s easy to reason about upper bounds for the memory FSMs will need.
    • Determinism: FSMs exhibit deterministic behaviour, meaning that the transition from one state to another is uniquely determined by the input. This determinism can be advantageous in scenarios where predictability and reliability are crucial. Turing machines instead can exhibit a complexity that may not be needed for certain applications.
    • Halting: we have all heard of the Halting Problem, right? Turns out, proving that a Finite State Machine halts is always possible.
    • Testing: as the number of states and transitions of an FSM are finite, testing all the code-paths of such a machine is indeed a finite task. There are indeed State Machine learning algorithms that verify implementations (see LearnLib ) and property-based testing has a good chance to reach all edge-cases.

    When all we want is to implement a communication protocol, be it XMPP or TLS, where what we implement is a relationship between input, states, and output, a Finite State Machine is the right tool for the job. Using hierarchical states models certain protocols better than using a simplified version of the states plus global memory to decide the transitions (i.e., to implement 𝛿), and it results in a purer and more testable implementation.


    The post Reimplementing Technical Debt with State Machines appeared first on Erlang Solutions .


      Erlang Solutions: Advent of Code 2023

      news.movim.eu / PlanetJabber • 1 December, 2023 • 3 minutes

    Hello! I’m Piotr from Erlang Solutions Poland and I have the pleasure of saving Christmas this year with the power of Erlang for you!

    This is the second time we have participated in the amazing event called the Advent of Code. Last year’s edition was solved by my colleague Aleksander, and as far as I know, many of you enjoyed following his efforts. I hope you’ll like my tale of helping Santa too!

    I’m going to publish my solutions in my GitHub repository . They will be accompanied by a commentary, added to this page on a daily basis. I will add solutions for each day in an individual folder, with my input file downloaded from the AoC website.

    I’m also going to include a bit of microbenchmarking in every solution, with and without JIT: firstly, to measure the overall performance of the code, and secondly, to see how much the efficiency improves thanks to the JIT. I’m going to measure only the computation time, with a `timer:tc/3` call, as I consider the time needed to compile a module and load the input file irrelevant. By “load” I mean: read it and split it into lines. Any further processing of individual lines is considered computation. I will provide the min, max and arithmetic average of 100 runs.

    I’ll be implementing the solutions as escripts, so running them is a bit more straightforward. And frankly, I think they are underrated; sometimes I prefer them over writing Bash scripts. I’ll always include the `-mode(compile).` directive to avoid the interpretation performance penalty. For those who are not aware of this capability, I’ll also run Day 1 without this option to show you how the timings change.
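    Putting those pieces together, a benchmarking escript in the spirit described might look like this sketch of ours (the `solve/1` body is a placeholder, and `timer:tc` is used here in its fun-taking variant):

```erlang
#!/usr/bin/env escript
%% Compile instead of interpreting, as discussed above.
-mode(compile).

main([File]) ->
    {ok, Bin} = file:read_file(File),
    %% "Loading" = reading the file and splitting it into lines.
    Lines = binary:split(Bin, <<"\n">>, [global, trim]),
    %% Time only the computation, over 100 runs.
    Times = [element(1, timer:tc(fun solve/1, [Lines]))
             || _ <- lists:seq(1, 100)],
    io:format("min ~B us, avg ~B us, max ~B us~n",
              [lists:min(Times), lists:sum(Times) div length(Times),
               lists:max(Times)]).

solve(Lines) ->
    length(Lines).  %% placeholder for the day's actual puzzle logic
```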

    I’m going to run every piece of code on a Linux Mint 21.2 VirtualBox machine with 4 cores and 8GB of memory, hosted on my personal PC with a Ryzen 3700X and DDR4 at 3200MHz. I will use OTP 26.1.1.

    Day 1

    Part 1

    I would never have suspected that I’d begin the AoC challenge by being loaded onto a trebuchet. I’d better do the math properly! Or rather, have Erlang do the calibration for me.

    FYI: I do have some extra motivation to repair the snow production: my kids have been singing “Do You Want to Build a Snowman?” for a couple of days already, and there is still nowhere near enough of it where I live.

    I considered three approaches to the first part of the puzzle:

    1. Run a regular expression on each line.
    2. Filter characters of a line with binary comprehension and then get the first and last digit from the result.
    3. Iterate over characters of a line and store digits in two accumulators.

    I chose the last one, as (1) felt like shooting a mosquito with an M61 Vulcan cannon, and the second one felt less Erlang-ish than the third. After all, matching binaries and recursive solutions are very natural in this language.
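    The third approach might be shaped roughly like this (a hedged sketch of ours, not necessarily the published solution):

```erlang
%% Walk the line once: remember the first digit seen and keep
%% overwriting the last digit seen.
calibration_value(Line) -> parse(Line, undefined, undefined).

parse(<<C, Rest/binary>>, undefined, _Last) when C >= $0, C =< $9 ->
    parse(Rest, C - $0, C - $0);           %% first digit also starts as last
parse(<<C, Rest/binary>>, First, _Last) when C >= $0, C =< $9 ->
    parse(Rest, First, C - $0);            %% later digits overwrite the last
parse(<<_, Rest/binary>>, First, Last) ->
    parse(Rest, First, Last);              %% skip non-digits
parse(<<>>, First, Last) ->
    First * 10 + Last.
```

    For example, `calibration_value(<<"a1b2c3d">>)` evaluates to 13.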

    Timings

                           Min        Avg        Max
    Compiled + JIT     0.000091s  0.000098s  0.000202s
    Compiled + no JIT  0.000252s  0.000268s  0.000344s
    Interpreted        0.091494s  0.094965s  0.111017s

    Part 2

    By choosing the method of matching binaries, I was able to add support for digits as words pretty easily. If there were more mappings than just nine, I’d probably use a map to store all possible conversions and maybe even compile a regular expression from them.

    Eventually, the temptation of violating the DRY rule a bit was too strong and I just went for individual function clauses.

    And my solution was invalid. Shame on me, but I admit I needed a hint from other participants: it turned out that some words can overlap, and they have to be treated as individual digits. It wasn’t explicitly specified, and ignoring overlaps in the example did not lead to an invalid result – a truly evil decision by the AoC maintainers!

    Simply put, at first I thought such a code would be enough:

    parse(<<"one", Rest/binary>>, First, _Last) -> store(1, Rest, First);

    But the actual `Rest` must be defined as `<<_:8, Rest/binary>>`, so that the scan advances by just one character instead of consuming the whole word.
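    With that fix applied, a clause for a spelled-out digit recognises the word but advances by only one byte before recursing, so overlaps like “oneight” yield both digits. A reduced sketch of ours with just two word mappings:

```erlang
digits(Line) -> scan(Line, []).

%% Recognise the word, but advance only one byte before recursing,
%% so that <<"oneight">> yields both 1 and 8.
scan(<<"one",   _/binary>> = B, Acc) -> skip_one(B, [1 | Acc]);
scan(<<"eight", _/binary>> = B, Acc) -> skip_one(B, [8 | Acc]);
scan(<<C, Rest/binary>>, Acc) when C >= $0, C =< $9 ->
    scan(Rest, [C - $0 | Acc]);
scan(<<_, Rest/binary>>, Acc) -> scan(Rest, Acc);
scan(<<>>, Acc) -> lists:reverse(Acc).

%% Advance exactly one byte so overlapping words are both seen.
skip_one(<<_:8, Rest/binary>>, Acc) -> scan(Rest, Acc).
```

    Here `digits(<<"oneight">>)` returns [1,8].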

    Timings

                           Min        Avg        Max
    Compiled + JIT     0.000212s  0.000225s  0.000324s
    Compiled + no JIT  0.000648s  0.000679s  0.000778s
    Interpreted        0.207670s  0.213344s  0.242223s

    JIT does make a difference, doesn’t it?

    The post Advent of Code 2023 appeared first on Erlang Solutions .


      Ignite Realtime Blog: More Openfire plugin maintenance releases!

      news.movim.eu / PlanetJabber • 28 November, 2023 • 2 minutes

    Following the initial batch of Openfire plugin releases that we did last week, another few have been made available!

    Version 1.0.1 of the Spam Blacklist plugin was released. This plugin uses an external blocklist to reject traffic from specific addresses. This is a minor maintenance release that does not introduce functionality changes.

    Version 1.0.0 of the EXI plugin was released. Efficient XML Interchange (EXI) is a binary XML format for exchange of data on a computer network. It is one of the most prominent efforts to encode XML documents in a binary data format, rather than plain text. Using EXI format reduces the verbosity of XML documents as well as the cost of parsing. Improvements in the performance of writing (generating) content depends on the speed of the medium being written to, the methods and quality of actual implementations. After our request for comments on this prototype, no major defects were reported. As such, we’ve decided to publish a proper release of the plugin!

    Version 1.0.4 of the Email on Away plugin was released. This plugin allows forwarding messages to a user’s email address when the user is away (but not offline). In this release, the build process was fixed. No functional changes were introduced.

    Version 1.0.0 of the Push Notification plugin was released. This plugin adds support for sending push notifications to client software, as described in XEP-0357: “Push Notifications”. In this release, compatibility with Openfire 4.8 was implemented.

    Version 0.0.3 of the Ohùn plugin was released. This plugin implements a simple audio conferencing solution for Openfire using the Kraken WebRTC client and server . No functional changes were introduced in this release.

    Version 0.0.3 of the Gitea plugin was released. This Openfire plugin adds real-time communication to content management, using a familiar Git-based workflow to create a very responsive collaboration platform that enables an agile team to create, manage and deliver any type of content with quality assurance. In this release, the Gitea dependency was updated to 1.7.3.

    Version 1.3.0 of the User Status plugin was released. This plugin automatically saves the last status (presence, IP address, logon and logoff time) per user and resource to userStatus table in the Openfire database. In this release, compatibility with Openfire 4.8 was implemented.

    All of these plugins should show up in your Openfire admin console in the next few hours. You can also download them directly from their archive pages, which are linked in the text above.

    For other release announcements and news, follow us on Mastodon or X.

    1 post - 1 participant

    Read full topic


      yaxim: Planned downtime + Happy 10th anniversary, yax.im!

      news.movim.eu / PlanetJabber • 27 November, 2023 • 2 minutes

    Our Android XMPP client yaxim was created in 2009. A decade later, we celebrated its round birthday. To make the user experience more straightforward, we launched the yax.im public XMPP service in November 2013, which became the default server in yaxim. Now, ten years later, it’s time to recap and to upgrade the hosting infrastructure.

    Downtime announcement

    We will migrate the server from the old infrastructure to the new one, on November 31st, between 8:00 and 11:00 UTC. Please expect a few hours of downtime until everything is settled!

    The migration will also include an upgrade to prosody 0.12 and the deactivation of TLS v1.0 and v1.1 in favor of TLS v1.3.

    Many thanks go to boerde.de for being our home for the last decade, and for enduring a few DDoS attacks on our behalf. Additional thanks go to AS250 for offering us a new home.

    Ten years review

    We started the service on Debian Squeeze with the freshly released Prosody 0.9. Since then, there have been quite a few upgrades of both the OS and of Prosody. However, for technical reasons, the server is currently running a Prosody development snapshot that predates the current 0.12 major update.

    In that time we’ve grown significantly, and are currently processing on average 100 thousand messages and 6.3 million presence stanzas every day.

    Back in 2013, we were quite avant-garde in supporting not only TLS v1.0, but also v1.1 and v1.2. Support for the latter two was only added to Android with the 4.1 release in 2012, and wasn’t enabled by default until Android 5 in 2014. Now we are lagging behind, given that TLS v1.3 came with Android 10 four years ago.

    IRC transports

    Since 2017, we have been operating a beta (internal only) biboumi IRC transport on irc.yax.im, and two dedicated transports for IRCnet on ircnet.yax.im and for euIRC on euirc.yax.im.

    These were never officially announced and have just a few users. They will be migrated to the new host as well, but with a lower priority.

    Spam fighting efforts

    The XMPP spam problem has been a significant annoyance to most users. We believe that XMPP spam is best fought at the server level, where aggregate views and statistics are available, and spam can be blocked centrally for all users with mod_firewall.

    In 2017, we implemented spam detection and prevention, both for yax.im users and against spam bots registered on our server. In 2020, we extended that to auto-quarantine suspicious account creations.

    In the last two weeks, our spam fighting efforts have blocked 21,000 spam messages from 7,600 accounts on 72 servers, including 480 auto-flagged bot accounts on yax.im. We were not explicitly keeping count, but the number of auto-flagged accounts since the measure was introduced in 2020 is around 30,000.

    As part of the JabberSPAM initiative, we have helped report abuse and bring down unmaintained spam relays.

    Future

    With the new hosting platform and our committed team of three administrators, we are ready to take on the challenges of the future and to sustain the growth of our user base.


      Ignite Realtime Blog: New Openfire plugin: Reporting Account Affiliations

      news.movim.eu / PlanetJabber • 27 November, 2023 • 1 minute

    I’m excited to announce a new Openfire plugin: the Reporting Account Affiliations Plugin!

    This plugin implements a new prototype XMPP extension of the same name.

    To quote the specification:

    In practice, a server may not trust all accounts equally. For example, if a server offers anonymous access or open registration, it may have very little trust in such users. Meanwhile a user account that was provisioned by a server administrator for an employee or a family member would naturally have a higher level of trust.

    Even if a server alters its own behaviour based on how much it trusts a user account (such as preventing anonymous users from performing certain actions), other entities on the network have no way of knowing what trust to place in JIDs they have not encountered before - they can only judge the server as a whole.

    This lack of insight can result in the negative actions (spam, abuse, etc.) by untrusted users on a domain causing the whole domain to be sanctioned by other servers.

    This new plugin allows for Openfire to report to other entities the relationship it has with a user on its domain.

    Note: at the time of writing, the protocol as implemented by this plugin has not yet been accepted for consideration or approved in any official manner by the XMPP Standards Foundation, and this document is not yet an XMPP Extension Protocol (XEP). This plugin should be considered experimental.

    The plugin will be visible in the list of available plugins of your Openfire instance in a matter of hours. You can also download it directly from its archive page.

    For other release announcements and news, follow us on Mastodon or X.



      Ignite Realtime Blog: Smack 4.4.7 released

      news.movim.eu / PlanetJabber • 26 November, 2023 • 1 minute

    We are happy to announce the release of Smack 4.4.7. For a high-level overview of what’s changed in Smack 4.4.7, check out Smack’s changelog.

    As with the last release, 4.4.6, parts of this release were driven by feedback from the Jitsi folks.

    Due to SMACK-927, we had to change the behavior of a certain kind of incoming stanza listener, namely the ones added with XMPPConnection.addStanzaListener() . Before Smack 4.4.7, they were invoked outside of Smack’s main loop; now they are invoked as part of the main loop. As a result, all listeners have to finish before the main loop of the connection can continue. Consequently, if you use these kinds of listeners, make sure that they do not block, as otherwise the connection will also stop processing incoming stanzas, which can easily lead to a deadlock.

    You usually should not need to use these kinds of incoming stanza listeners; alternatives include XMPPConnection.addSyncStanzaListener() and XMPPConnection.addAsyncStanzaListener() . Especially the latter, asynchronous stanza listeners, are efficiently processed and safer to use. Note that those listeners are not guaranteed to be processed in order.

    As always, this Smack patchlevel release is API compatible within the same major-minor version series (4.4) and all Smack releases are available via Maven Central .

    We would like to use this occasion to point out that Smack now ships with a NOTICE file. Please note that this adds some requirements when using Smack, as per the Apache License 2.0. The content of Smack’s NOTICE file can conveniently be retrieved using Smack.getNoticeStream() .



      Erlang Solutions: You’ve been curious about LiveView, but you haven’t gotten into it

      news.movim.eu / PlanetJabber • 6 April, 2023 • 21 minutes

    As a backend developer, I’ve spent most of my programming career away from frontend development. Whether it’s React/Elm for the web or Swift/Kotlin for mobile, these are fields of knowledge that fall outside of what I usually work with.

    Nonetheless, I always wanted to have a tool at my disposal for building rich frontends. While the web seemed like the platform with the lowest barrier to entry, the Javascript ecosystem had become so vast that familiarizing oneself with it was no small task.

    This is why I got very excited when Chris McCord first showed LiveView to the world. Building interactive frontends, with no Javascript required? This sounded like it was made for all of us Elixir backend developers that were “frontend curious”.

    However, if you haven’t already jumped into it, you might be hesitant to start. After all: it’s often not just about learning LiveView as if you were writing a greenfield project, but about how you would add LiveView into that Phoenix app that you’re already working on.

    Therefore, throughout this guide, I’ll presume that you already have an existing project that you wish to integrate LiveView into. If you have the luxury of a clean slate, then other resources (such as the Programming Phoenix LiveView book, by Bruce A. Tate and Sophie DeBenedetto ) may be of more use.

    I hope that this article may serve you well as a starting point!

    Will it work for my use case?

    You might have some worries about whether LiveView is a technology that you can introduce to your application. After all: no team likes to adopt a technology that they later figure out does not suit their use case.

    There are some properties of LiveView which are inherent to the technology, and therefore must be considered:

    Offline mode

    The biggest question is whether you need an offline mode for your application. My guess is that you probably do not, but if you do, LiveView is not the technology for you. The reason is that LiveView is rendered on the backend , necessitating communication with it.

    Latency

    The second biggest question: do you expect the latency from your clients to the server to be high, and would high latency be a serious detriment to your application?

    As Chris McCord put it in his announcement blog post on the Dockyard blog:

    “Certain use cases and experiences demand zero-latency, as well as offline capabilities. This is where Javascript frameworks like React, Ember, etc., shine.”

    Almost every interaction with a LiveView interface will send a request to the server. While requests have highly optimized payloads, if you expect the average round trip from client to server to take too many milliseconds, then the user experience will suffer. LiveView ships with tools for testing your application under increased latency, but if you already know that there’s a latency ceiling that your clients must not exceed, yet very likely would, then LiveView may not be suitable.

    If these are not of concern to your use case, then let’s get going!

    What does it take for me to start?

    Phoenix setup

    First of all, you’ll want to have a recent version of Phoenix, and your code up to date. Upgrade guides are available for older projects.

    LiveView setup

    The next step is to install LiveView into your existing project. The LiveView documentation has a great section on the subject: Installing LiveView into an existing project .

    The guide is rather straightforward, so I will not reiterate its contents here. The only comment I’ll add is that the section at the very end about adding a topbar is (as the documentation points out) optional. It should be said, however, that this is added by default in new LiveView projects, so if you want a setup that’s as close as possible to a freshly generated project, you should include it.

    At this point, you should have everything ready for introducing your own LiveView code!

    Quick LiveView overview

    Before we get to the actual coding, let’s take a quick look at the life cycle of a LiveView page. Here’s a high-level overview:

    The first request made to a LiveView route will be a plain HTTP request. The router will invoke a LiveView module, which calls the mount/3 function and then the render/1 function. This will render a static page (SEO-friendly out-of-the-box, by the way!), with the required Javascript for LiveView to work. The page then opens a WebSocket connection between the client and the server.

    After the WebSocket connection has been established, we get into the LiveView life cycle:

    Note that mount/3 and render/1 will be called again, this time over the WebSocket connection. While this probably will not be something you need to worry about when writing your first LiveView pages, it might be of relevance to know that this is the case (discussion about this can be read here). If you have a very expensive function call to make, and you only want to make it once, consider using the connected?/1 function.
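    To illustrate, here’s a minimal sketch of a mount/3 that only performs its expensive work once the WebSocket is established. The SomeContext.expensive_load/0 call is made up for the example; connected?/1 itself is part of Phoenix.LiveView:

    def mount(_params, _session, socket) do
      data =
        if connected?(socket) do
          # Second mount, over the WebSocket: do the real work
          SomeContext.expensive_load()
        else
          # First (static HTTP) mount: render a cheap placeholder
          []
        end
    
      {:ok, assign(socket, data: data)}
    end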

    After render/1 has been called a second time, we get into the LiveView loop: wait for events, send the events over the wire, change the state on the server, then send back the minimal required data for updating the page on the client.

    Let’s now see how we’ll need to change your code to get to this LiveView flow.

    Making things live

    Now you might be asking:

    “OK, so the basics have been set up. What are the bare minimum things to get a page to be live?”

    You’ll need to do the following things:

    1. Convert an existing route to a live one
    2. Convert the controller module into a live module
    3. Modify the templates
    4. Introduce liveness

    Let’s go over them, one by one:

    Bringing life to the dead

    Here’s a question I once had, that you might be wondering:

    If I’ve got a regular (“dead”) Phoenix route, can I just add something live to a portion of the page, on the existing “dead” route?

    Considering how LiveView works, I’d like to transform the question into two new (slightly different) questions:

    1. Can one preserve the current routes and controllers, having them execute live code?
    2. Can one express the live interactions in the dead controllers?

    The answer to the first question: yes, but generally you won’t . You won’t, because of the answer to the second question: no , you’ll need separate live modules to express the live interactions.

    This leads to an important point:

    If you want some part of a page to be live, then your whole page has to be live.

    Technically, you can have the route be something other than live (e.g. a get route), and you would then use Phoenix.LiveView.Controller.live_render/3 in a “dead” controller function to render a LiveView module. This does still mean, however, that the page (the logic and templates) will be defined by the live module. You’re not “adding something live to a portion of the dead page”, but rather delegating to a live module from a dead route; you’ll still have to migrate the logic and templates to the live module.

    Therefore, your live code will be in LiveView modules (instead of your current controller modules), invoked by live routes. As a sidenote: while it’s not covered by this article, you’ll eventually group live routes with live_session/3, enabling redirects between routes without full page reloads.
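    For reference, such a grouping might look roughly like this in router.ex (the session name and the module names here are made up for illustration):

    scope "/", MyAppWeb do
      pipe_through [:browser]
    
      live_session :default do
        # Redirects between routes in the same live_session
        # happen without a full page reload
        live "/foo", FooLive
        live "/bar", BarLive
      end
    end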

    Introducing a live route

    Many tutorials and videos about LiveView use the example of programming a continuously updating rendering of a thermostat. Let’s therefore presume that you’ve got a home automation application, and up until now you had to go to /thermostats and refresh the page to get the latest data.

    The router.ex might look something like this:

    defmodule HomeAutomationWeb.Router do
      use HomeAutomationWeb, :router
    
      pipeline :browser do
        # ...
      end
    
      pipeline :logged_in do
        # ...
      end
    
      scope "/", HomeAutomationWeb do
        pipe_through [:browser, :logged_in]
    
        # ...
    
        resources "/thermostats", ThermostatController
        post "/thermostats/reboot", ThermostatController, :reboot
      end
    end
    

    This is a rather simple router (with some lines removed for brevity), but you can probably figure out how this compares to your code. We’re using a call to Phoenix.Router.resources/2 here to cover a standard set of CRUD actions; your set of actions could be different.

    Let’s introduce the following route after the post-route:

    live "/live/thermostats", ThermostatLive
    

    The ThermostatLive will be the module to which we’ll be migrating logic from ThermostatController.

    Creating a live module to migrate to

    Creating a skeleton

    Let’s start by creating a directory for LiveView modules, then create an empty thermostat_live.ex in that directory.

    $ mkdir lib/home_automation_web/live
    $ touch lib/home_automation_web/live/thermostat_live.ex
    

    It might seem a bit strange to create a dedicated directory for the live modules, considering that the dead parts of your application already have controller/template/view directories. This convention, however, allows one to make use of the following feature from the Phoenix.LiveView.render/1 callback (slight changes by me, for readability):

    If you don’t define [render/1 in your LiveView module], LiveView will attempt to render a template in the same directory as your LiveView. For example, if you have a LiveView named MyApp.MyCustomView inside lib/my_app/live_views/my_custom_view.ex, Phoenix will look for a template at lib/my_app/live_views/my_custom_view.html.heex.

    This means that it’s common for LiveView projects to have a live directory with file pairs, such as foobar.ex and foobar.html.heex, i.e. module and corresponding template. Whether you inline your template in the render/1 function or put it in a dedicated file is up to you.

    Open the lib/home_automation_web/live/thermostat_live.ex file, and add the following skeleton of the ThermostatLive module:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      def mount(_params, _session, socket) do
        {:ok, socket}
      end
    
      def render(assigns) do
        ~H"""
        <div id="thermostats">
          <p>Thermostats</p>
        </div>
        """
      end
    end
    

    There are two mandatory callbacks in a LiveView module: mount/3 and render/1. As mentioned earlier, you can leave out render/1 if you have a template file with the right file name. You can also leave out mount/3, but that would mean that you neither want to set any state nor do any work on mount, which is unlikely.

    Migrating mount logic

    Let’s now look at our imagined HomeAutomationWeb.ThermostatController, to see what we’ll be transferring over to ThermostatLive:

    defmodule HomeAutomationWeb.ThermostatController do
      use HomeAutomationWeb, :controller
    
      alias HomeAutomation.Thermostat
    
      def index(conn, _params) do
        thermostats = Thermostat.all_for_user(conn.assigns.current_user)
    
        render(conn, :index, thermostats: thermostats)
      end
    
      # ...
    
      def reboot(conn, %{"id" => id}) do
        {:ok, thermostat} =
          id
          |> Thermostat.get!()
          |> Thermostat.reboot()
    
        conn
        |> put_flash(:info, "Thermostat '#{thermostat.room_name}' rebooted.")
        |> redirect(to: Routes.thermostat_path(conn, :index))
      end
    end
    

    We’ll be porting a subset of the functions that are present in the controller module: index/2 and reboot/2. This is mostly to have two somewhat different controller actions to work with.

    Let’s first focus on the index/2 function. We could imagine that Thermostat.all_for_user/1 makes a database call of some kind, possibly with Ecto. conn.assigns.current_user would be added to the assigns by the logged_in Plug in the pipeline in the router.

    Let’s naively move over the ThermostatController.index/2 logic to the LiveView module, and take it from there:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      alias HomeAutomation.Thermostat
    
      def mount(_params, _session, socket) do
        thermostats = Thermostat.all_for_user(socket.assigns.current_user)
    
        {:ok, assign(socket, %{thermostats: thermostats})}
      end
    
      def render(assigns) do
        ~H"""
        <div id="thermostats">
          <p>Thermostats</p>
        </div>
        """
      end
    end
    

    Firstly, we’re inserting the index/2 logic into the mount/3 function of ThermostatLive, meaning that the data will be fetched on page load.

    Secondly, notice that we changed the argument to Thermostat.all_for_user/1 from conn.assigns.current_user to socket.assigns.current_user. This is just a change of variable name, of course, but it signifies a change in the underlying data structure: you’re not working with a Plug.Conn struct, but rather with a Phoenix.LiveView.Socket.

    So far we’ve written some sample template code inside the render/1 function definition, and we haven’t seen the actual templates that would render the thermostats, so let’s get to those.

    Creating live templates

    Let’s presume that you have a rather simple index page, listing all of your thermostats.

    <h1>Listing Thermostats</h1>
    
    <%= for thermostat <- @thermostats do %>
      <div class="thermostat">
        <div class="row">
          <div class="column">
            <ul>
              <li>Room name: <%= thermostat.room_name %></li>
              <li>Temperature: <%= thermostat.temperature %></li>
            </ul>
          </div>
    
          <div class="column">
            Actions: <%= link("Show", to: Routes.thermostat_path(@conn, :show, thermostat)) %>
            <%= link("Edit", to: Routes.thermostat_path(@conn, :edit, thermostat)) %>
            <%= link("Delete",
              to: Routes.thermostat_path(@conn, :delete, thermostat),
              method: :delete,
              data: [confirm: "Are you sure?"]
            ) %>
          </div>
    
          <div class="column">
            <%= form_for %{}, Routes.thermostat_path(@conn, :reboot), fn f -> %>
              <%= hidden_input(f, :id, value: thermostat.id) %>
              <%= submit("Reboot", class: "rounded-full") %>
            <% end %>
          </div>
        </div>
      </div>
    <% end %>
    
    <%= link("New Thermostat", to: Routes.thermostat_path(@conn, :new)) %>
    

    Each listed thermostat has the standard resource links of Show/Edit/Delete, with a New-link at the very end of the page. The only thing that goes beyond the usual CRUD actions is the form_for, defining a Reboot-button. The Reboot-button will initiate a request to the POST /thermostats/reboot route.

    As previously mentioned, we can either move this template code into the ThermostatLive.render/1 function, or we can create a template file named lib/home_automation_web/live/thermostat_live.html.heex. To get used to the new ways of LiveView, let’s put the code into the render/1 function. You can always extract it later (but remember to delete the render/1 function, if you do!).

    The first step would be to simply copy-paste everything, with the small change that you need to replace every instance of @conn with @socket. Here’s what ThermostatLive will look like:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      alias HomeAutomation.Thermostat
    
      def mount(_params, _session, socket) do
        thermostats = Thermostat.all_for_user(socket.assigns.current_user)
    
        {:ok, assign(socket, %{thermostats: thermostats})}
      end
    
      def render(assigns) do
        ~H"""
        <h1>Listing Thermostats</h1>
    
        <%= for thermostat <- @thermostats do %>
          <div class="thermostat">
            <div class="row">
              <div class="column">
                <ul>
                  <li>Room name: <%= thermostat.room_name %></li>
                  <li>Temperature: <%= thermostat.temperature %></li>
                </ul>
              </div>
    
              <div class="column">
                Actions: <%= link("Show", to: Routes.thermostat_path(@socket, :show, thermostat)) %>
                <%= link("Edit", to: Routes.thermostat_path(@socket, :edit, thermostat)) %>
                <%= link("Delete",
                  to: Routes.thermostat_path(@socket, :delete, thermostat),
                  method: :delete,
                  data: [confirm: "Are you sure?"]
                ) %>
              </div>
    
              <div class="column">
                <%= form_for %{}, Routes.thermostat_path(@socket, :reboot), fn f -> %>
                  <%= hidden_input(f, :id, value: thermostat.id) %>
                  <%= submit("Reboot", class: "rounded-full") %>
                <% end %>
              </div>
            </div>
          </div>
        <% end %>
    
        <%= link("New Thermostat", to: Routes.thermostat_path(@socket, :new)) %>
        """
      end
    end
    

    While this makes the page render, both the links and the form still perform the same “dead” navigation as before, leading to full-page reloads, not to mention that we currently navigate away from the live page.

    To make the page more live, let’s focus on making the clicking of the Reboot-button result in a LiveView event, instead of a regular POST with subsequent redirect.

    Changing the button to something live

    The Reboot-button is a good target to turn live, as it should just fire an asynchronous event, without redirecting anywhere. Let’s have a look at how the button is currently defined:

    <%= form_for %{}, Routes.thermostat_path(@socket, :reboot), fn f -> %>
      <%= hidden_input(f, :id, value: thermostat.id) %>
      <%= submit("Reboot", class: "rounded-full") %>
    <% end %>
    

    The reason why the “dead” template used a form_for with a submit is two-fold. Firstly, since the action of rebooting the thermostat is not a navigation action, using an anchor tag (<a>) styled to look like a button would not be appropriate: using a form with a submit button is better, since it indicates that an action will be performed, and the action is clearly defined by the form’s method and action attributes. Secondly, a form allows you to include a CSRF token, which is automatically injected into the resulting <form> by form_for.

    Let’s look at what the live version will look like:

    <%= link("Reboot",
      to: "#",
      phx_click: "reboot",
      phx_value_id: thermostat.id,
      data: [confirm: "Are you sure?"]
    ) %>
    

    Let’s break this down a bit:

    A note about <form>

    First thing to note: this is no longer a <form>!

    Above I mentioned CSRF protection being a reason for using the <form>, but the Channel (i.e. the WebSocket connection between server and client) is already protected with a CSRF token, so we can send LiveView events without worrying about this.

    The detail above about navigation technically still applies, but in LiveView one would (generally) use a link with to: “#” for most things functioning like a button.

    As a minor note: you’ll still be using forms in LiveView for data input, although you’ll be using the <.form> component instead of calling form_for.

    The phx_click event

    The second thing to note is the phx_click attribute and its value, “reboot”. The key indicates what event should be fired when interacting with the generated <a> tag. The various possible event bindings can be found here:

    https://hexdocs.pm/phoenix_live_view/bindings.html

    If you want to have a reference for what events you can work with in LiveView, the link above is a good one to bookmark!

    Clarifying a potentially confusing detail: the events listed in the documentation linked above use hyphens (-) as separators in their names. link uses underscores (_), but apart from this, the event names are the same.

    The “reboot” string specifies the “name” of the event that is sent to the server. We’ll see the usage of this string in a second.

    The value attribute

    Finally, let’s talk about the phx_value_id attribute. phx_value_id is special, in that part of the attribute name is user defined. The phx_value_-part of the attribute name indicates to LiveView that the attribute is an “event value”, and what follows after phx_value_ (in our case: id) will be the key name in the resulting “event data map” on the server side. The value of the attribute will become the value in the map.

    This means that this…:

    phx_value_id: "thermostat_13",

    …will be received as the following on the server:

    %{id: "thermostat_13"}

    Further explanation can be found in the documentation:

    https://hexdocs.pm/phoenix_live_view/bindings.html#click-events

    Adding the corresponding event to the LiveView module

    Now that we’ve changed the Reboot-button in the template, we can get to the final step: amending the ThermostatLive module to react to the “reboot” event. We need to add a handle_event function to the module, and we’ll use the logic that we saw earlier in ThermostatController.reboot/2:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      alias HomeAutomation.Thermostat
    
      def mount(_params, _session, socket) do
        # ...
      end
    
      def handle_event("reboot", %{"id" => id}, socket) do
        {:ok, thermostat} =
          id
          |> Thermostat.get!()
          |> Thermostat.reboot()
    
        {:noreply,
          put_flash(
            socket,
            :info,
            "Thermostat '#{thermostat.room_name}' rebooted."
          )}
      end
    
      def render(assigns) do
        # ...
      end
    end
    

    This handle_event function will react to the “reboot” event. The first argument to the function is the event name, the second is any passed data (through phx-value-*), and finally the socket.

    A quick note about the :noreply: presume that you’ll be using {:noreply, socket}, as the alternative ({:reply, map, socket}) is rarely useful. Just don’t worry about this, for now.

    That’s it!

    If you’ve been following this guide, trying to adapt it to your application, then you should have something like the following:

    1. A live route.
    2. A live module, where you’ve ported some of the logic from the controller module.
    3. A template that’s been adapted to be rendered by a live module.
    4. An element on the page that, when interacted with, causes an event to fire, with no need for a page refresh.

    At this stage, one would probably want to address the other CRUD actions, at the very least having their navigation point to the live route, e.g. creating a new thermostat should not result in a redirect to the dead route. Even better would be to have the CRUD actions all be changed to be fully live, requiring no page reloads. However, this is unfortunately outside of the scope of this guide.

    I hope that this guide has helped you to take your first steps toward working with LiveView!

    Further reading

    Here’s some closing advice that you might find useful, if you want to continue on your own.

    Exploring generators

    A very instructive exercise is comparing the code Phoenix generates for “dead” pages vs. live pages.

    Following are the commands for first generating a “dead” CRUD page setup for a context (Devices) and entity (Thermostat), and then generating the same context and entity in a live fashion. The resulting git commits illustrate how the same intent is expressed in the two styles.

    $ mix phx.new home_automation --live
    $ cd home_automation
    $ git init .
    $ git add .
    $ git commit -m "Initial commit"
    $ mix phx.gen.html Devices Thermostat thermostats room_name:string temperature:integer
    $ git add .
    $ git commit -m "Added Devices context with Thermostat entity"
    $ git show
    $ mix phx.gen.live Devices Thermostat thermostats room_name:string temperature:integer
    $ git add .
    $ git commit -m "Added Live version of Devices with Thermostat"
    $ git show
    

    Note that when you get to the phx.gen.live step, you’ll have to answer Y to a couple of questions, as you’ll be overwriting some code. Also, you’ll generate a superfluous Ecto migration, which you can ignore.

    Study these generated commits, the resulting files, and the difference between the generated approaches, as it helps a lot with understanding how the transition from dead to live is done.

    Broadcasting events

    You might want your live module to react to specific events in your application. In the case of the thermostat application it could be the change of temperature on any of the thermostats, or the reboot status getting updated asynchronously. In the case of a LiveView chat application, it would be receiving a new message from someone in the conversation.

    A very commonly used method for generating and listening to events is Phoenix.PubSub. Not only is Phoenix.PubSub a robust solution for broadcasting events, it also gets pulled in as a dependency of Phoenix, so you should already have the package installed.
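    As a rough sketch of the idea (the topic name, the message shape, and the HomeAutomation.PubSub server name are assumptions for illustration; subscribe/2 and handle_info/2 are the real Phoenix.PubSub and LiveView APIs), a live module could subscribe on mount and re-render when a broadcast arrives:

    def mount(_params, _session, socket) do
      # Subscribe only on the WebSocket mount, to avoid subscribing twice
      if connected?(socket) do
        Phoenix.PubSub.subscribe(HomeAutomation.PubSub, "thermostats")
      end
    
      {:ok, assign(socket, thermostats: Thermostat.all_for_user(socket.assigns.current_user))}
    end
    
    # Invoked when some other process broadcasts, e.g.:
    # Phoenix.PubSub.broadcast(HomeAutomation.PubSub, "thermostats",
    #   {:temperature_changed, thermostat})
    def handle_info({:temperature_changed, _thermostat}, socket) do
      # Re-fetch the list; LiveView sends only the diff to the client
      {:noreply, assign(socket, thermostats: Thermostat.all_for_user(socket.assigns.current_user))}
    end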

    There are numerous guides out there on how to use Phoenix.PubSub, but a good place to start is probably watching Chris McCord use LiveView and Phoenix.PubSub to create a Twitter clone in about 15 minutes (the part with Phoenix.PubSub is about half-way through the video).

    HTTP verbs

    Regarding HTTP verbs, coming from the world of dead routes, you might be wondering:

    I’ve got various GET/POST/PUT/etc. routes that serve different purposes. When building live modules, do all of the routes (with their different HTTP verbs) just get replaced with live?

    Yes, mostly. Generally the live parts of your application will handle their communication over the WebSocket connection, sending various events. This means that any kind of meaning you wish to communicate through the various HTTP verbs will be communicated through various events instead.

    With that said, you may still have parts of your application that will be accessed with regular HTTP requests, which would be a reason to keep those routes around. They will not, however, be called from your live components.

    Credits

    Last year, Stone Filipczak wrote an excellent guide on the SmartLogic blog on how to quickly introduce LiveView to an existing Phoenix app. It was difficult not to overlap with that guide, so my intention has been to complement it. Either way, I encourage you to check it out!

    The post You’ve been curious about LiveView, but you haven’t gotten into it appeared first on Erlang Solutions .