
      Mathieu Pasquet: Poezio 0.15 / 0.15.1

      news.movim.eu / PlanetJabber • 28 March, 2025 • 1 minute

    About three years after the last version, poezio 0.15 (and 0.15.1, to address a small packaging mishap; version numbers are cheap) was released yesterday!

    Poezio is a terminal-based XMPP client which aims to replicate the feeling of terminal-based IRC clients such as irssi or weechat; to this end, poezio originally only supported multi-user chats.

    Features

    Not a lot this time around… Maybe next time?

    • A new moderate plugin (for XEP-0425 moderation).
    • Better self-ping (using the XEP-0410 plugin that is now built into slixmpp).
    • Use the system CA store by default.
    • Add a Ctrl-↑ shortcut to run /correct on the last message.
    • Poezio benefits from the recent slixmpp improvements, which means it can now transparently use Direct TLS as well as StartTLS.

    Fixes

    • Duplicated first message in conversation/private tab.
    • The many "clone" users in a room roster when on a spotty connection.
    • Python 3.13 and 3.14 compatibility (plenty of deprecations and removals).
    • Plenty of type checking mistakes and minor bugs spotted by mypy and pylint.

    Removals

    • Only Python 3.11 and up is supported (previously 3.7).
    • The OTR plugin has been removed.
    • The launch.sh/update.sh scripts have been heavily simplified to use the uv tool instead of custom logic. They will be updated in the future to also run with pipx, as uv is not available on some platforms.

      Erlang Solutions: My Journey from Ruby to Elixir: Lessons from a Developer

      news.movim.eu / PlanetJabber • 27 March, 2025 • 9 minutes

    Why I Looked Beyond Ruby

    For years, Ruby was my go-to language for building everything from small prototypes to full-fledged production apps. I fell in love with its elegance and expressiveness and how Ruby on Rails could turn an idea into a working web app in record time. The community—with its focus on kindness and collaboration—only deepened my appreciation. In short, Ruby felt like home.

    But as my projects grew in complexity, I started running into bottlenecks. I had apps requiring real-time features, massive concurrency, and high availability. Scaling them with Ruby often meant juggling multiple processes, external services, or creative threading approaches—all of which worked but never felt truly seamless. That’s when I stumbled upon Elixir.

    At first glance, Elixir’s syntax reminded me of Ruby. It looked approachable and developer-friendly. But beneath the surface lies a fundamentally different philosophy, heavily influenced by Erlang’s functional model and the concurrency power of the BEAM. Moving from Ruby’s object-oriented approach to Elixir’s functional core was eye-opening. Here’s how I made that transition and why I think it’s worth considering if you’re a fellow Rubyist.

    The Mindset Shift: From Objects to Functions

    Life Before: Classes and Objects

    In Ruby, I approached problems by modeling them as classes, bundling data and behavior together. It was second nature to create an @name instance variable in an initializer, mutate it, and rely on inheritance or modules to share behavior. This style allowed me to write expressive code, but it also hid state changes behind class boundaries.

    A New Paradigm in Elixir

    Elixir flips that script. Data is immutable, and functions are the stars of the show. Instead of objects, I have modules that hold pure functions. Instead of inheritance, I rely on composition and pattern matching. This required me to unlearn some habits.

    • No more hidden state: Every function receives data as input and returns a new copy of that data, so you always know where transformations happen.

    • No more deep class hierarchies: In Elixir, code sharing happens via modules and function imports rather than extending base classes.

    Example: Refactoring a Class into a Module

    Ruby

    class Greeter
      def initialize(name)
        @name = name
      end
    
      def greet
        "Hello, #{@name}!"
      end
    end
    
    greeter = Greeter.new("Ruby")
    puts greeter.greet  # => "Hello, Ruby!"
    

    Elixir

    defmodule Greeter do
      def greet(name), do: "Hello, #{name}!"
    end
    
    IO.puts Greeter.greet("Elixir")  # => "Hello, Elixir!"
    

    At first, I missed the idea of storing state inside an object, but soon realized how clean and predictable code can be when data and functions are separated. Immutability drastically cut down on side effects, which in turn cut down on surprises.

    Concurrency: Learning to Trust Processes

    Ruby’s approach

    Ruby concurrency typically means spinning up multiple processes or using multi-threading for IO-bound tasks. If you need to queue background jobs, gems like Sidekiq step in. Sidekiq runs in its own OS processes, separate from the main web server, and these processes can run on multiple cores for true parallelism. This approach is straightforward but often demands more memory and additional infrastructure for scaling.

    On the plus side, Ruby can handle many simultaneous web requests if they’re primarily IO-bound (such as database queries). Even with the Global Interpreter Lock (GIL) limiting the parallel execution of pure Ruby code, IO tasks can still interleave, allowing a single OS process to serve multiple requests concurrently.
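    This interleaving can be illustrated in plain Ruby (a sketch of mine, using sleep to stand in for IO): two "requests" run on separate threads and finish in roughly the time of one, because sleep, like real IO, releases the GIL.

```ruby
# Hedged sketch (not from the article): simulating two IO-bound requests
# with sleep, which releases the GIL, so the threads interleave.
require "benchmark"

def fake_io_call(label)
  sleep 0.2            # stands in for a database query or HTTP request
  "#{label} done"
end

results = nil
elapsed = Benchmark.realtime do
  threads = %w[req1 req2].map { |l| Thread.new { fake_io_call(l) } }
  results = threads.map(&:value)  # value joins the thread and returns its result
end

puts results.inspect
puts format("elapsed: %.2fs", elapsed)  # well under the ~0.4s a serial run would take
```

    If the work were CPU-bound pure Ruby instead of IO, the GIL would serialise it and the two tasks would take roughly twice as long.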

    Elixir and the BEAM

    Elixir, on the other hand, was built for concurrency from the ground up, thanks to the BEAM virtual machine. It uses lightweight processes (not OS processes or threads) that are cheap to create and easy to isolate. These processes don’t share memory but communicate via message passing—meaning a crash in one process won’t cascade.
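    The mailbox idea behind message passing can be loosely sketched even in Ruby, with a Thread and a Queue (a rough analogy of mine, not the BEAM model itself: real BEAM processes are far lighter-weight and share no memory at all).

```ruby
# Loose analogy: the worker thread blocks on its "mailbox" until a
# message arrives, then produces a reply as its return value.
mailbox = Queue.new

worker = Thread.new do
  msg = mailbox.pop          # block until a message arrives
  "echo: #{msg}"             # the thread's value, like a reply
end

mailbox << "ping"            # send a message to the worker
reply = worker.value
puts reply  # => "echo: ping"
```

    On the BEAM, the equivalent uses spawn, send, and receive, and a crashed process takes nothing else down with it.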

    Example: Background Jobs

    Ruby (Sidekiq)

    class UserSyncJob
      include Sidekiq::Worker
    
      # This job fetches user data from an external API
      # and updates the local database.
      def perform(user_id)
        begin
          # 1. Fetch data from external service
          external_data = ExternalApi.get_user_data(user_id)
    
          # 2. Update local DB (pseudo-code)
          user = User.find(user_id)
          user.update(
            name: external_data[:name],
            email: external_data[:email]
          )
    
          puts "Successfully synced user #{user_id}"
        rescue => e
          # If something goes wrong, Sidekiq can retry
          # automatically, or we can log the error.
          puts "Error syncing user #{user_id}: #{e.message}"
        end
      end
    end
    
    # Trigger the job asynchronously:
    UserSyncJob.perform_async(42)
    
    

    Elixir (Oban)

    Although GenServer is often used to showcase Elixir’s concurrency model, a more accurate comparison to Sidekiq would be Oban – a background job processing library.

    defmodule MyApp.Workers.UserSyncJob do
      use Oban.Worker, queue: :default
    
      @impl Oban.Worker
      def perform(%{args: %{"user_id" => user_id}}) do
        with {:ok, external_data} <- ExternalApi.get_user_data(user_id),
             %User{} = user <- MyApp.Repo.get(User, user_id) do
          user
          |> User.changeset(%{
            name: external_data.name,
            email: external_data.email
          })
          |> MyApp.Repo.update!()
    
          IO.puts("Successfully synced user #{user_id}")
        else
          error -> IO.puts("Error syncing user #{user_id}: #{inspect(error)}")
        end
    
        :ok
      end
    end
    
    # Enqueue the job asynchronously:
    MyApp.Workers.UserSyncJob.new(%{"user_id" => 42})
    |> Oban.insert()
    
    
    

    With Oban, jobs are persistent, retried automatically on failure, and can survive restarts, just like Sidekiq. It leverages Elixir’s process model but gives you the robustness of a mature job queueing system. Since it stores jobs in PostgreSQL, you get full visibility into job states and histories without adding extra infrastructure. Both libraries offer paid tiers: Sidekiq Pro and Oban Pro.

    Here are some notable features offered in the Pro versions of Sidekiq and Oban:

    Sidekiq Pro:

    1. Batches and Callbacks: Enables grouping jobs into sets that can be tracked collectively, either programmatically or within the Sidekiq Web interface, with the ability to execute callbacks once all jobs in a batch are complete.
    2. Enhanced Reliability: Utilizes Redis’s RPOPLPUSH command to ensure that jobs are not lost if a process crashes or is terminated unexpectedly. Additionally, the Sidekiq Pro client can withstand transient Redis outages or timeouts by enqueueing jobs locally upon error and attempting delivery once connectivity is restored.
    3. Queue Pausing and Scheduling: Allows for pausing queues (e.g., processing a queue only during business hours) and expiring unprocessed jobs after a specified deadline, providing greater control over job processing times.

    Oban Pro:

    1. Workflows: Enables composing jobs with arbitrary dependencies, allowing for sequential, fan-out, and fan-in execution patterns to model complex job relationships.
    2. Global Concurrency and Rate Limiting: Provides the ability to limit the number of concurrent jobs running across all nodes (global concurrency) and to restrict the number of jobs executed within a specific time window (rate limiting).
    3. Dynamic Cron: Offers cron configuration scheduling before boot or during runtime, globally, with scheduling guarantees and per-entry timezone overrides. It’s an ideal solution for applications that can’t miss a cron job or must dynamically start and manage scheduled jobs at runtime.

    Their open-source cores, however, already cover the most common background job needs and are well-suited for many production applications.

    Debugging and Fault Tolerance: A New Perspective

    Catching Exceptions in Ruby

    Error handling in Ruby typically involves begin/rescue blocks. If a critical background job crashes, I might rely on Sidekiq’s retry logic or external monitoring. It worked, but I always worried about a missed exception bringing down crucial parts of the app.
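    That defensive style can be sketched like this (an illustrative example of mine; the method name and the simulated failure are hypothetical, not from any real app): wrap the risky call, retry a bounded number of times, then give up.

```ruby
# Hedged sketch of defensive Ruby error handling: begin/rescue with
# bounded retries, the pattern Sidekiq automates for background jobs.
def sync_user(user_id, attempts: 3)
  tries = 0
  begin
    tries += 1
    raise "transient network error" if tries < 3  # simulate two failures
    "synced #{user_id}"
  rescue => e
    retry if tries < attempts
    "gave up on #{user_id}: #{e.message}"
  end
end

puts sync_user(42)  # => "synced 42" (the third attempt succeeds)
```

    Every caller has to remember this ceremony; Elixir's answer, below, is to move recovery out of the worker entirely.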

    Supervision Trees in Elixir

    Elixir uses a concept called a supervision tree, inherited from Erlang’s OTP. Supervisors watch over processes, restarting them automatically if they crash. At first, I found it odd to let a process crash on purpose instead of rescuing the error. But once I saw how quickly the supervisor restarted a failed process, I was hooked.

    defmodule Worker do
      use GenServer
    
      def start_link(_) do
        GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
      end
    
      def init(_), do: {:ok, %{}}
    
      def handle_call(:risky, _from, state) do
        # Raising here crashes the process on purpose; the reply below is
        # never reached, and the supervisor restarts the worker.
        raise "Something went wrong"
        {:reply, :ok, state}
      end
    end
    
    defmodule SupervisorTree do
      use Supervisor
    
      def start_link(_) do
        Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)
      end
    
      def init(:ok) do
        children = [
          {Worker, []}
        ]
        Supervisor.init(children, strategy: :one_for_one)
      end
    end
    
    

    Now, if Worker crashes, the supervisor restarts it automatically. No manual intervention, no separate monitoring service, and no global meltdown.
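    What a one_for_one supervisor does can be loosely mimicked in Ruby (my sketch, greatly simplified: real OTP supervisors also track restart intensity, ordering, and process isolation):

```ruby
# Hedged analogy of a one_for_one supervisor: run a worker block, and
# if it crashes, restart it up to a limit.
def supervise(max_restarts: 3)
  restarts = 0
  begin
    result = yield
  rescue
    restarts += 1
    retry if restarts <= max_restarts
    raise  # restart budget exhausted: escalate, as OTP would
  end
  [result, restarts]
end

crashes = 0
result, restarts = supervise do
  crashes += 1
  raise "boom" if crashes < 3  # the worker crashes twice, then succeeds
  :ok
end

puts "result=#{result} after #{restarts} restarts"
```

    The crucial difference: in Ruby this loop lives in your code, while on the BEAM the supervisor is a separate process, so a crash in the worker cannot take the recovery logic down with it.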

    LiveView: A Game-Changer for Web Development

    Why I Loved Rails

    Rails made it trivial to spin up CRUD apps, handle migrations, and integrate with robust testing tools like RSpec. But building real-time interactions (like chat or real-time dashboards) could be tricky without relying heavily on JavaScript frameworks or ActionCable.

    Phoenix + LiveView

    Elixir’s Phoenix framework parallels Rails in many ways: fast bootstrapping, a clear folder structure, and strong conventions. But Phoenix Channels and LiveView push it even further. With LiveView, I can build highly interactive, real-time features that update the DOM via websockets—all without a dedicated front-end framework.


    Elixir (Phoenix LiveView)

    defmodule ChatLive do
      use Phoenix.LiveView
    
      def mount(_params, _session, socket) do
        {:ok, assign(socket, :messages, [])}
      end
    
      def handle_event("send", %{"message" => msg}, socket) do
        {:noreply, update(socket, :messages, fn msgs -> msgs ++ [msg] end)}
      end
    
      def render(assigns) do
        ~H"""
        <h1>Chat</h1>
        <ul>
          <%= for msg <- @messages do %>
            <li><%= msg %></li>
          <% end %>
        </ul>
    
        <form phx-submit="send">
          <input type="text" name="message" placeholder="Type something"/>
          <button type="submit">Send</button>
        </form>
        """
      end
    end
    
    

    This simple LiveView code handles real-time chat updates directly from the server, minimising the JavaScript I need to write. The reactive UI is all done through server-rendered updates.

    My Takeaways

    Embracing Immutability

    At first, it was tough to break free from the habit of mutating data in place. But once I got comfortable returning new data structures, my code became far more predictable. I stopped chasing side effects and race conditions.
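    The habit translates even back into Ruby. A tiny illustrative sketch of returning new data instead of mutating in place:

```ruby
# Sketch of the "return new data" habit in plain Ruby: freeze the input
# and build new structures rather than mutating the original.
messages = ["hello", "world"].freeze

shouted    = messages.map(&:upcase)  # a new array; messages is untouched
with_extra = shouted + ["!"]         # again: a new array, no mutation

puts messages.inspect    # => ["hello", "world"]
puts with_extra.inspect  # => ["HELLO", "WORLD", "!"]
```

    In Elixir this is not a discipline you opt into; every value works this way.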

    Let It Crash

    Ruby taught me to rescue and recover from every possible error. Elixir taught me to trust the supervisor process. This “let it crash” philosophy took some getting used to, but it simplifies error handling significantly.

    Less JavaScript, More Productivity

    LiveView drastically cut down my front-end overhead. I don’t need a full client framework for real-time updates. Seeing how quickly I could build a proof-of-concept live chat convinced me that Elixir was onto something big.

    Still Love Ruby

    None of this means I dislike Ruby. I still think Rails is fantastic for many use cases, especially when you need to prototype something quickly or build a classic CRUD app. Ruby fosters a developer-friendly environment that many languages can only aspire to. I simply reached a point where concurrency and fault tolerance became a top priority—and that’s where Elixir really shines.

    Final Advice for Rubyists Curious About Elixir

    1. Start Small: Experiment with a tiny service or background job. Don’t rewrite your entire monolith on day one.
    2. Get Comfortable with Functional Concepts: Embrace immutability and pattern matching. The mental shift is real, but it pays off.
    3. Check Out Phoenix and LiveView: If you’re doing web dev, see how your typical Rails flow translates in Phoenix. And definitely try LiveView.
    4. Utilise Existing Ruby Skills: Your understanding of test-driven development, domain modeling, and code readability all carry over—you’ll just write them differently.

    Ultimately, if you’re running into the same scaling or concurrency issues I did, Elixir might just be the upgrade you need. It brings a breath of fresh air to large-scale, real-time, and fault-tolerant applications while keeping developer happiness front and center. For me, it was worth the leap, and I haven’t looked back since. If you’re looking for a detailed comparison of Elixir and Ruby, our comprehensive Elixir vs. Ruby guide has you covered.

    The post My Journey from Ruby to Elixir: Lessons from a Developer appeared first on Erlang Solutions.


      The XMPP Standards Foundation: Open Letter to Meta: Support True Messaging Interoperability with XMPP

      news.movim.eu / PlanetJabber • 27 March, 2025 • 1 minute

    It has been a little over a year since Meta announced their proposal for third parties to achieve messaging interoperability with WhatsApp, with Facebook Messenger following half a year later. Not for everyone, and only because these services were designated as gatekeepers under the EU’s recent Digital Markets Act (DMA). So only in the EU, and even then with many strings attached. In that time, a lot has been written. Element/Matrix have put in efforts to work with Meta to get some interoperability going. Unfortunately, the reference offers don’t provide what we would call true interoperability, and given that virtually nobody has taken Meta up on this offer, their proposal falls short across the board.

    Over at the IETF, the More Instant Messaging Interoperability (MIMI) working group is working on mechanisms for interoperability. While several of our members are involved with MIMI and working on the implementation of the related MLS protocol for end-to-end encryption, we believe it is time to have true interoperability using a well-tested and widely implemented set of standards: XMPP.

    To that end, we today publish an Open Letter to Meta: a call to action urging Meta to adopt XMPP for messaging interoperability. For more in-depth reasoning, we also provide a detailed technical briefing.

    We are ready. Let’s make it happen.


      Mathieu Pasquet: slixmpp v1.10

      news.movim.eu / PlanetJabber • 26 March, 2025 • 2 minutes

    This new version does not have many new features, but it has quite a few breaking changes, which should not impact many people, as well as one important security fix.

    Thanks to everyone who contributed with code, issues, suggestions, and reviews!

    Security

    After working on TLS stuff, I noticed that we still allowed unencrypted SCRAM to be negotiated, which is really not good. For packagers who only want this security fix, the commit fd66aef38d48b6474654cbe87464d7d416d6a5f3 should apply cleanly on any slixmpp version.

    (Most servers in the wild have unencrypted connections disabled entirely, so this is only an issue in man-in-the-middle attacks.)

    Enhancements

    • slixmpp now supports XEP-0368 and makes it easy to choose between direct TLS and STARTTLS.

    Breaking Changes

    • The security issue mentioned above is a breaking change if you actively want to connect to servers without encryption. If that is the desired behavior, you can still set xmpp['feature_mechanisms'].unencrypted_scram = True on init.

    • Removal of the timeout_callback parameter anywhere it was present. Users are encouraged to await on the coroutine or the future returned by the function, which will raise an IqTimeout exception when appropriate.

    • Removal of the custom Google plugins (both the google and gmail_notify plugins), which I am guessing have not worked in a very long time.

    • Removal of the Stream Compression (XEP-0138) plugin. It was not working at all, and use of compression is actively discouraged for security reasons.

    • Due to the new connection code, the configuration of the connection parameters has changed quite a bit:

      • The XMLStream (from which the ClientXMPP class inherits) no longer has a use_ssl parameter. Instead it has enable_direct_tls, enable_starttls, and enable_plaintext attributes, which control whether we connect using STARTTLS or direct TLS. The plaintext option is for components, since we only implement the Jabber Component Protocol (XEP-0114).
      • The connect() method signature has changed entirely; it no longer takes any parameters other than host and port (which must be provided together to have an effect).
      • Handling of custom addresses has changed a bit: they are now set by calling connect(), and kept until connect() is called again without arguments.
      • The DNS code will now fetch both xmpps-client and xmpp-client records (unless direct TLS is explicitly disabled) and prefer direct TLS if it has the same priority as STARTTLS.
      • The SRV records targeted by the queries can be customized using the tls_services and starttls_services attributes of ClientXMPP (but I have no idea why anyone would do this).

    Fixes

    • Another issue encountered with the Rust JID: comparing a JID against strings that cannot be parsed, or against other objects, would raise an InvalidJID exception instead of returning False.
    • The ssl_cert event would only be invoked on STARTTLS.
    • One of the asyncio warnings on program exit (that a coroutine is still running).
    • Traceback with BaseXMPP.get.
    • A potential edge case in the disco (XEP-0030) plugin when using strings instead of JIDs.
    • A traceback in vcard-temp (XEP-0054) and Legacy Delayed Delivery (XEP-0091) when parsing datetimes.
    • A traceback when manipulating conditions in feature mechanisms.
    • A traceback in Ad-hoc commands (XEP-0050) during error handling.
    • Many tracebacks in OAuth over XMPP (XEP-0235) due to urllib API changes.

    Links

    You can find the new release on codeberg, pypi, or, in a short while, the distributions that package it.

    Previous version: 1.9.1.


      Kaidan: Kaidan 0.12.0: User Interface Polishing and Account Migration Fixes

      news.movim.eu / PlanetJabber • 20 March, 2025 • 1 minute

    Kaidan 0.12.0 looks and behaves better than ever before! Chats can now quickly be pinned and moved. In addition, the picker for mentioning group chat participants is placed above the cursor if enough space is available. With this release, OMEMO can be used right after migrating an account, and migrated contacts are correctly verified.

    Have a look at the changelog for more details.

    Changelog

    Features:

    • Use square selection to crop avatars (fazevedo)
    • Use background with rounded corners for chat list items (melvo)
    • Remove colored availability indicator from chat list item (melvo)
    • Display group chat participant picker above text cursor in large windows (melvo)
    • Do not allow entering/sending messages without visible characters (melvo)
    • Remove leading/trailing whitespace from exchanged messages (melvo)
    • Ignore received messages without displayable content if they cannot be otherwise processed (melvo)
    • Allow showing/hiding buttons to pin/move chat list items (melvo)

    Bugfixes:

    • Fix style for Flatpak (melvo)
    • Fix displaying video thumbnails and opening files for Flatpak (melvo)
    • Fix message reaction details not opening a second time (melvo)
    • Fix opening contact addition view on receiving XMPP URIs (melvo)
    • Fix format of text following emojis (melvo)
    • Fix eliding last message text for chat list item (melvo)
    • Fix unit tests (mlaurent, fazevedo, melvo)
    • Fix storing downloaded files with unique names (melvo)
    • Fix overlay to change/open avatars shown before hovered in account/contact details (melvo)
    • Fix verification of moved contacts (fazevedo)
    • Fix setting up end-to-end encryption (OMEMO 2) after account migration (melvo)

    Notes:

    • Kaidan requires KWindowSystem and KDSingleApplication now (mlaurent)
    • Kaidan requires KDE Frameworks 6.11 now
    • Kaidan requires KQuickImageEditor 0.5 now
    • Kaidan requires QXmpp 1.10.3 now

    Download

    Install Kaidan for your distribution; see the packaging status overview for availability.


      Erlang Solutions: DORA Compliance: What Fintech Businesses Need to Know

      news.movim.eu / PlanetJabber • 12 February, 2025 • 7 minutes

    The Digital Operational Resilience Act (DORA) is now in effect as of 17th January 2025, making compliance mandatory for fintech companies, financial institutions, and ICT providers across the UK and EU. With over 22,000 businesses impacted, DORA sets clear expectations for how firms must manage operational resilience and protect against cyber threats.

    As cybercriminals become more sophisticated, regulatory action has followed. DORA is designed to ensure that businesses have the right security measures in place to handle disruptions, prevent data breaches, and stay operational under pressure.

    Yet, despite having time to prepare, 43% of organisations admit they won’t be fully compliant for at least another three months. But non-compliance isn’t just a delay. It comes with serious risks, including penalties and reputational damage.

    So, what does DORA mean for your fintech business? Why is compliance so important, and how can you make sure you meet the requirements?

    What is DORA?

    With technology at the heart of financial services, the risks associated with cyber threats and ICT disruptions have never been higher. The European Parliament introduced the Digital Operational Resilience Act (DORA) to strengthen the financial sector’s ability to withstand and recover from these digital risks.

    Originally drafted in September 2020 and ratified in 2022, DORA officially came into force in January 2025. It establishes strict requirements for managing ICT risks, ensuring financial institutions follow clear protection, detection, containment, recovery, and repair guidelines.

    A New Approach to Cybersecurity

    This regulation marks a major step forward in cybersecurity, prioritising operational resilience to keep businesses running even in the face of severe cyber threats or major ICT failures. Compliance will be monitored through a unified supervisory approach, with the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA), and the European Securities and Markets Authority (ESMA) working alongside national regulators to enforce the new standards.

    A report from the European Supervisory Authorities (EBA, EIOPA, and ESMA) highlighted that in 2024, of the registers analysed during a ‘dry run’ exercise involving nearly 1,000 financial entities across the EU, just 6.5% passed all data quality checks. This shows just how demanding the requirements are, and the importance of getting it right early for a smooth path to compliance.

    The Five Pillars of DORA

    DORA introduces firm rules on ICT risk management, incident reporting, resilience testing, and oversight of third-party providers. Rather than a one-size-fits-all approach, compliance depends on factors like company size, risk tolerance, and the type of ICT systems used. However, at its core, DORA is built around five key pillars that form the foundation of a strong operational resilience framework.

    Figure: The five pillars of DORA (source: Zapoj)

    These pillars also serve as the basis for a DORA compliance checklist, which businesses can use to ensure they meet regulatory requirements.

    Below is a breakdown of each pillar and what businesses need to do to comply:

    1. ICT Risk Management

    Businesses must establish a framework to identify, assess, and mitigate ICT risks. This includes:

    • Conducting regular risk assessments to spot vulnerabilities.
    • Implementing security controls to address identified risks.
    • Developing a clear incident response plan to handle disruptions effectively.

    2. ICT-Related Incident Reporting

    Companies must have structured processes to detect, report, and investigate ICT-related incidents. This involves:

    • Setting up clear reporting channels for ICT issues.
    • Classifying incidents by severity to determine response urgency.
    • Notifying relevant authorities promptly when serious incidents occur.

    3. Digital Operational Resilience Testing

    Financial institutions are required to test their ICT systems regularly to ensure they can withstand cyber threats and operational disruptions. This includes:

    • Running simulated attack scenarios to test security defences.
    • Assessing the effectiveness of existing resilience measures.
    • Continuously improving systems based on test results.

    4. ICT Third-Party Risk Management

    DORA highlights the importance of managing risks linked to third-party ICT providers. Businesses must:

    • Conduct due diligence before working with external service providers.
    • Establish contractual agreements outlining security expectations.
    • Continuously monitor third-party performance to ensure compliance.

    5. Information Sharing

    Collaboration is a key part of DORA, with financial institutions encouraged to share cyber threat intelligence. This may include:

    • Participating in industry forums to stay informed about emerging threats.
    • Sharing threat intelligence with peers to strengthen collective defences.
    • Conducting joint cybersecurity exercises to improve incident response.

    By following these five pillars, businesses can build a strong foundation for digital resilience. Compliance isn’t just about meeting regulatory requirements; it’s about safeguarding operations, protecting customers, and strengthening the financial sector against growing cyber threats.

    How to Achieve DORA Compliance for Your Business

    Regardless of what stage of compliance a business is at, there are a few key areas it must focus on to protect itself. Here’s what you need to do:

    Understand DORA’s Scope and Requirements

    The first step to DORA compliance is understanding what’s required. Take the time to familiarise yourself with the regulation’s requirements and ask questions early where anything is unclear.

    Conduct a Risk Assessment

    A solid risk assessment is at the heart of DORA compliance. Identify and evaluate risks across your ICT systems—this includes everything from cyber threats to software glitches. Understanding these risks helps you plan how to minimise their impact on your operations.

    Create a Resilience Strategy

    With your risk assessment in hand, develop a tailored resilience strategy. This should include:

    • Preventive Measures: Set up cyber defences and redundancy systems to prevent disruptions.
    • Detection Systems: Ensure you can quickly spot any anomalies or threats.
    • Response and Recovery Plans: Have clear plans in place to respond and recover if an incident happens.

    Invest in Cybersecurity and IT Infrastructure

    To meet DORA compliance for business, invest in strong cybersecurity tools like firewalls and encryption. Ensure your IT infrastructure is resilient, with reliable backup and recovery systems to minimise disruptions.

    Strengthen Incident Reporting

    DORA stresses the importance of quick and accurate incident reporting. Establish clear channels for detecting and reporting ICT incidents, ensuring timely updates to authorities when needed.

    Build a Culture of Resilience

    Resilience is an ongoing effort. To stay compliant, create a culture where resilience is top of mind:

    • Provide regular staff training.
    • Regularly test and audit your systems.
    • Stay updated on emerging risks and technologies.

    Partner with IT Experts

    DORA compliance can be tricky, especially if your team lacks in-house expertise. Partnering with IT service providers who specialise in compliance can help you meet DORA’s requirements more smoothly.

    Consequences for Non-Compliance

    We’ve already established the importance of meeting DORA’s strict mandates. But failing to comply with these regulations can have serious consequences for businesses, from hefty fines to operational restrictions. Here’s what businesses need to be aware of to protect their organisation:

    Fines for Non-Compliance

    • Up to 2% of global turnover or €10 million, whichever is higher, for non-compliant financial institutions.
    • Third-party ICT providers could face fines as high as €5 million or 1% of daily global turnover for each day of non-compliance.
    • Failure to report major incidents within 4 hours can lead to further penalties.

    Reputational Damage and Leadership Liability

    • Public notices of breaches can cause lasting reputational damage, affecting business trust and relationships.
    • Business leaders can face personal fines of up to €1 million for failing to ensure compliance.

    Operational Restrictions

    • Regulators can limit or suspend business activities until compliance is achieved.
    • Data traffic records can be requested from telecommunications operators if there’s suspicion of a breach.

    How Erlang Solutions Can Help You with DORA Compliance

    Don’t panic, prioritise. If you’ve identified that your business may be at risk of non-compliance, taking action now is key. Erlang Solutions can support you in meeting DORA’s requirements through our Security Audit for Erlang and Elixir (SAFE).

    With extensive experience in the financial sector, we understand the critical need for resilient, scalable systems. Our expertise with Erlang and Elixir has helped leading fintech institutions, including Klarna, Vocalink, and Ericsson, build fault-tolerant, high-performing and compliant systems.

    SAFE is aligned with several key areas of DORA, including ICT risk management, resilience testing, and third-party risk management:

    • Proactive Risk Identification and Mitigation: SAFE identifies vulnerabilities and provides recommendations to address risks before they become critical. This proactive approach supports DORA’s requirements for continuous ICT risk management.
    • Continuous Monitoring Capabilities: SAFE allows ongoing monitoring of your systems, which aligns with DORA’s emphasis on continuous risk detection and mitigation.
    • Detailed Incident Response Recommendations: SAFE’s detailed findings help you refine your incident response and recovery plans, ensuring your systems are prepared to quickly recover from cyberattacks or disruptions.
    • Third-Party Risk Management: The security audit can provide insights into your third-party integrations, helping to ensure they meet necessary security standards and comply with DORA’s requirements.

    Conclusion

    DORA compliance is now in effect, making it essential to act if your business isn’t fully compliant. Delays can lead to penalties and increased risk exposure. Prioritising ICT risk management, strengthening resilience, and ensuring proper incident reporting will bring you closer to compliance. But this isn’t just about meeting requirements, it’s about safeguarding your organisation and building long-term operational resilience.

    If you have compliance concerns or just want to talk through your next steps, we’re here to help. Contact us to talk through your options.

    The post DORA Compliance: What Fintech Businesses Need to Know appeared first on Erlang Solutions.

      Erlang Solutions: Understanding Digital Wallets

      news.movim.eu / PlanetJabber • 23 January, 2025 • 7 minutes

    Digital wallets, once considered futuristic, have now become essential tools for both consumers and businesses. But what are digital wallets, and why should you care about them? Customer expectations are changing, and many companies are turning to wallets to streamline transactions and enhance the customer experience.

    This guide unpacks the fundamentals of digital wallets, highlighting their benefits, market trends, and implications for businesses.

    What Are Digital Wallets?

    Digital wallets (or e-wallets) have changed the way we make and receive payments. By 2025, digital payments are expected to account for 50% of global payments.

    At their core, digital wallets store a user’s payment information, securely encrypted for seamless transactions. This could involve credit card details, bank accounts, or even cryptocurrencies.

    Apple Pay, Google Wallet, PayPal, and Samsung Pay have become household names, but the ecosystem is much broader and growing rapidly as more industries recognise their potential. Digital wallets simplify purchases and integrate with loyalty programmes, personal finance management, and even identity verification, offering a comprehensive solution for consumers and businesses alike.

    How Do Digital Wallets Work?

    Digital wallets offer a secure and straightforward way to manage transactions. In a time when data breaches are increasingly common, security has never been more important. With cybercrime damages projected to reach $10.5 trillion annually in 2025, they play a major role in keeping financial information safe.

    Here’s how they work. First, you link your financial details to the wallet. This could mean adding a credit card or connecting a bank account. Once your details are in, the wallet uses encryption and tokenisation to protect your sensitive information, converting it into a secure format that’s almost impossible for unauthorised parties to access.

    When you make a payment, the process is quick and simple: tap, scan, or click. Behind the scenes, your digital wallet securely communicates with the payment processor to authorise the transaction. With advanced security measures like encryption and tokenisation, digital wallets not only reduce the risk of fraud but also allow for a seamless and reliable user experience.
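    As a rough illustration of the tokenisation step described above, the sketch below swaps a card number for an opaque token that merchants can store safely. The class and method names are our own invention, not any wallet provider’s actual API:

```python
import secrets

class TokenVault:
    """Minimal tokenisation sketch: the real card number (PAN) is
    kept server-side, and outside parties only ever see a token."""

    def __init__(self):
        self._vault = {}  # token -> PAN, never leaves the processor

    def tokenise(self, pan: str) -> str:
        token = secrets.token_urlsafe(16)  # unguessable stand-in value
        self._vault[token] = pan
        return token

    def detokenise(self, token: str) -> str:
        # Only the payment processor can map a token back to the PAN
        return self._vault[token]

vault = TokenVault()
token = vault.tokenise("4111111111111111")
assert token != "4111111111111111"                    # merchant never sees the PAN
assert vault.detokenise(token) == "4111111111111111"  # processor can still charge it
```

    A production wallet combines this with encryption in transit and per-transaction (one-time) tokens, but the core idea is the same: a stolen token is useless without access to the vault.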

    Types of Digital Wallets

    Now let’s explore the various types of digital wallets available:

    1. Closed wallets

    Closed wallets are issued by a single company and can only be used for payments within that company’s own ecosystem; an Amazon balance is a familiar example.

    [Image: Amazon closed wallet example]

    2. Semi-closed wallets

    Semi-closed wallets, like Paytm or Venmo, allow payments at select merchant locations or online stores that accept their platform.

    [Image: Venmo semi-closed wallet example]

    3. Open wallets

    Backed by major financial institutions, open wallets allow broader transactions, including withdrawals, online purchases, and transfers. Popular examples include PayPal and Google Pay.

    4. Prepaid Wallets

    Prepaid wallets let you load funds in advance, so you use only what’s available. Once the balance is depleted, you just reload the wallet. This approach is great for budgeting.

    Choosing the right digital wallet depends on your business model.

    Whether you’re looking for customer loyalty through closed wallets or broader international reach with open wallets, selecting the right type will drive better engagement and efficiency.

    Why Should Businesses Care?

    The rise of digital wallets represents a strategic opportunity for businesses to serve their customers better and improve their bottom line. Here’s why:

    Enhanced customer experience

    Digital wallets streamline the checkout process, reducing friction and improving customer satisfaction. Features like one-click payments and loyalty integrations can drive repeat business.

    Improved security

    Tokenisation and encryption reduce the risks associated with traditional payment methods. This not only protects users but also helps businesses build trust.

    Cost efficiency

    Payment processors for digital wallets often charge lower fees than those for traditional credit card transactions, which can run as high as 3%. Depending on the provider, digital wallets can significantly cut these costs.

    Global reach

    For companies aiming to expand internationally, digital wallets simplify cross-border transactions by supporting multiple currencies.

    Digital wallets offer tangible benefits: enhanced customer experience, improved security, and cost efficiency. Businesses that integrate them can streamline payments and improve retention and satisfaction, driving growth.

    Integrating Digital Wallets into Your Business

    Before jumping into digital wallets, it’s worth taking a moment to plan things out. A bit of strategy can go a long way.

    Here are some key things to keep in mind:

    • Know what your customers want: Look at your data or run a quick survey to find out which wallets your customers use most.
    • Pick the right payment processor: Go for a provider that supports lots of wallets. This gives you flexibility and makes it easier to grow.
    • Focus on security: Work with experts, like Erlang Solutions, to help build secure systems that keep data safe and meet the necessary guidelines around payments.
    • Test, optimise and refine: Start with a proof of concept to see how things work. We can help you get this done quickly so you can adjust and stay ahead of the game.

    By understanding what your customers need and choosing flexible payment options, you can bring digital wallets into your business without any hiccups. Picking the right tech also means your operations keep running smoothly while you embrace innovations.

    Challenges and Considerations

    While digital wallets offer numerous benefits, they’re not without challenges:

    • Adoption barriers: Older demographics or tech-averse users may still prefer traditional payment methods. According to AARP, about 50% of older adults in the U.S. feel uncomfortable with new payment technologies. Businesses need strategies to educate and ease this transition.
    • Risk of fraud: While secure, digital wallets are not immune to hacking or phishing attacks. Companies must ensure continuous security updates and user education on best practices.
    • Regulatory compliance: Navigating the global landscape of payment regulations can be complex. From GDPR to PSD2, businesses must comply with relevant laws, especially when handling international transactions.

    While digital wallets offer advantages, businesses must address adoption barriers, security concerns, and regulatory compliance. Preparing for these challenges allows for a smooth transition and mitigates potential risks.

    Industries Using Digital Wallets

    We’ve established how digital wallets are revolutionising the way we handle payments, making transactions faster, safer, and more convenient. There are some industries to highlight that are making the most of this technology.

    Fintech

    In the fintech world, digital wallets have become indispensable. For instance, Erlang Solutions collaborated with TeleWare to enhance their Re:Call app with secure instant messaging capabilities for a major UK financial services group. By integrating MongooseIM, they ensured compliance with strict regulatory requirements while improving user experience.

    [Image: TeleWare Re:Call fintech case study]


    E-commerce

    Online shopping has been transformed by digital wallets. In 2021, a quarter of all UK transactions were made using digital wallets, and this trend is expected to grow by 18.9% through 2028. Features like biometric authentication not only make the checkout process quicker but also enhance security, leading to happier customers and increased loyalty.

    Gaming

    Gamers love convenience, and digital wallets deliver just that.

    By consolidating various payment methods, wallets like PayPal and Google Pay make in-game purchases seamless. This ease of use not only reduces transaction fees but also keeps players engaged, boosting customer retention.

    Banking

    Traditional banks are catching up by integrating digital wallets into their services. These wallets often combine payment processing with features like loyalty programmes and travel card integration. Advanced security measures, including biometric authentication, ensure that customers feel secure while enjoying personalised, cashless payment solutions.

    The Future of Digital Wallets

    The future of digital wallets lies in innovation.

    Here are just some of the trends we are poised to see shape the landscape in the next few years:

    • Integration with wearable tech: Smartwatches and fitness trackers will make payments even more convenient.
    • Biometric authentication: Consumers increasingly demand convenience without sacrificing security. Biometric features such as fingerprint recognition, voice ID, and facial scans will become commonplace, providing higher protection.
    • Cryptocurrency support: As digital currencies gain acceptance, more wallets are supporting crypto transactions. With over 300 million cryptocurrency users worldwide, businesses must be ready to accommodate this growing market.

    You can explore even more key digital payment trends here.

    Staying ahead of these trends will position your business as a forward-thinking leader in the digital economy.

    To conclude

    Digital wallets aren’t just another way to pay; they’re a game-changer for improving customer experience, boosting security, and driving growth. Nearly half the world’s consumers are already using them, and with transaction values expected to hit over $10 trillion by 2026, they’re becoming a must-have for businesses.

    The big question for leaders isn’t whether to integrate them, but how to do it right. Now’s the perfect time to get started. By focusing on secure tech, understanding your customers, and keeping an eye on trends, you can unlock massive benefits. Erlang Solutions has the expertise to help you build digital wallet solutions that are secure and scalable. Ready to chat about your strategy? Drop us a message today .


    The post Understanding Digital Wallets appeared first on Erlang Solutions.

      ProcessOne: How Big Tech Pulled Off the Billion-User Heist

      news.movim.eu / PlanetJabber • 16 January, 2025 • 10 minutes

    For many years, I have heard countless justifications for keeping messaging systems closed. Many of us have tried to rationalize walled gardens for various reasons:

    • Closed messaging systems supposedly enable faster progress, as there’s no need to collaborate on shared specifications or APIs. You can change course more easily.
    • Closed messaging systems are better for security, spam, or whatever other risks we imagine, because owners feel they have better control of what goes in and out.
    • Closed messaging systems are said to foster innovation by protecting the network owner’s investments.

    But is any of this really true? Let’s take a step back and examine these claims.

    A Brief History of Messaging Tools

    Until the 1990s, messaging systems were primarily focused on building communities. The dominant protocol of the time was IRC (Internet Relay Chat). While IRC allowed private messaging, its main purpose was to facilitate large chatrooms where people with shared interests could hang out and interact.

    In the 1990s, messaging evolved into a true communication tool, offering an alternative to phone calls. It enabled users to stay in touch with friends and family while forging new connections online. With the limitations of the dial-up era, where users weren’t always connected, asynchronous communication became the norm. Features like offline messages and presence indicators emerged, allowing users to see at a glance who was online, available, or busy.

    The revolution began with ICQ, quickly followed by competitors like Yahoo! Messenger and MSN Messenger. However, this proliferation of platforms created a frustrating experience: your contacts were spread across different networks, requiring multiple accounts and clients. Multiprotocol clients like Meebo and Pidgin emerged, offering a unified interface for these networks. Still, they often relied on unofficial protocol implementations, which were unreliable and lacked key features compared to native clients.

    To address these issues, a group of innovators in 1999 set out to design a better solution—an open instant messaging protocol that revolved around two fundamental principles:

    1. Federation: A federated protocol would allow users on any server to communicate seamlessly with users on other servers. This design was essential for scalability, as supporting billions of users on a single platform was unimaginable at the time.
    2. Gateway Support: The protocol would include gateways to existing networks, enabling users to connect with contacts on other platforms transparently, without needing to juggle multiple applications. The gateways were implemented on the server side, allowing fast iteration on gateway code.
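    Federation works because, as with e-mail, every address names the server responsible for routing it. A small sketch of XMPP address (JID) parsing makes both design principles visible; the gateway domain below is a made-up example of the old gateway addressing convention, not a real service:

```python
def parse_jid(jid: str):
    """Split an XMPP address (JID) into localpart, domain and
    resource. The domain tells any server where to route a
    message, which is what makes federation possible."""
    local, _, rest = jid.partition("@")
    domain, _, resource = rest.partition("/")
    return local, domain, resource or None

# A native, federated address: any XMPP server can route to jabber.org.
assert parse_jid("alice@jabber.org/laptop") == ("alice", "jabber.org", "laptop")

# Gateways mapped legacy-network contacts onto JIDs under a gateway
# domain, so legacy contacts looked like ordinary XMPP addresses.
assert parse_jid("12345@icq.gateway.example") == ("12345", "icq.gateway.example", None)
```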

    This initiative, originally branded as Jabber, gave rise to XMPP (Extensible Messaging and Presence Protocol), a protocol standardized by the IETF. XMPP gained traction, with support from several open-source servers and clients. Major players adopted the protocol, with Google using it for Google Talk and Facebook for Facebook Messenger, enabling third-party XMPP clients to connect to their services. The future of open messaging looked promising.

    Fast Forward 20 Years

    Today, that optimism has faded. Few people know about XMPP or its newer counterpart, Matrix. Google’s messaging services have abandoned XMPP, Facebook has closed its XMPP gateways, and the landscape has returned to the fragmentation of the past.

    Instead of Yahoo! Messenger and MSN, we now deal with WhatsApp, Facebook Messenger, Telegram, Google Chat, Signal, and even messaging features within social networks like Instagram and LinkedIn. Our contacts are scattered across these platforms, forcing us to switch between apps just as we did in the 1990s.

    What Went Wrong?

    Many of these platforms initially adopted XMPP, including Google, Facebook, and even WhatsApp. However, their focus on growth led them to abandon federation. Requiring users to create platform-specific accounts became a key strategy for locking in users and driving their friends to join the same network. Federation, while technically advantageous, was seen as a barrier to user acquisition and growth.

    The Big Heist

    The smartphone era marked a turning point in messaging, fueled by always-on connectivity and the rise of app stores. Previously, deploying an app at scale required agreements with mobile carriers to preload the app on the phones they sold. Carriers acted as gatekeepers, tightly controlling app distribution. However, the introduction of app stores and data plans changed everything. These innovations empowered developers to bypass carriers and build their own networks on top of carrier infrastructure, a phenomenon known as over-the-top (OTT) applications.

    Among these new apps was WhatsApp, which revolutionized messaging in several ways. Initially, WhatsApp relied on Apple’s Push Notification Service to deliver messages in real time, bypassing the need for a complex infrastructure at launch. Its true breakthrough, however, was the decision to use phone numbers as user identifiers, a bold move that set a significant precedent. At the time, most messaging platforms avoided this approach because phone numbers were closely tied to SMS, and validating them via SMS codes came with significant costs.

    WhatsApp cleverly leveraged this existing, international system of telecommunication identifiers to bootstrap its proprietary network. By using phone numbers, it eliminated the need for users to create, manage and share separate accounts, simplifying onboarding. WhatsApp also capitalized on the high cost of SMS at the time. Since short messages were often not unlimited, and international SMS was especially expensive, many users found it cheaper to rely on data plans or Wi-Fi to message friends and family—particularly across borders.

    When we launched our own messaging app, TextOne (now discontinued), we considered using phone numbers as identifiers but ultimately decided against it. Forcing users to disclose such personal information felt intrusive and misaligned with privacy principles. By then, the phone had shifted from being a shared household device to a deeply personal one, making phone numbers uniquely tied to individual identities.

    Later, WhatsApp launched its own infrastructure based on ejabberd, but it kept the service closed.

    Unfortunately, most major players seeking to scale their messaging platforms adopted the phone number as a universal identifier. WhatsApp’s early adoption of this strategy helped it rapidly amass a billion users, giving it a decisive first-mover advantage. However, it wasn’t the only player to recognize and exploit the power of phone numbers in building massive-scale networks. Today, the phone number is arguably the most accurate global identifier for individuals, serving as a cornerstone of the flourishing data economy.

    What’s Wrong With Using Phone Numbers as IDs?

    Phone numbers are a common good —a foundation of global communication. They rely on the principle of universal accessibility: you can reach anyone, anywhere in the world, regardless of their phone provider or location. This system was built on international cooperation, with a branch of the United Nations playing a key role in maintaining a provider-agnostic, interoperable platform. At its core is a globally unique phone numbering system, created through collaborative standards and protocols.

    However, over-the-top (OTT) companies have exploited this infrastructure to build private networks on top of the public system. They’ve leveraged the universal identification scheme of phone numbers—and, by extension, the global interoperable network—to construct proprietary, closed ecosystems.

    To me, this feels like a misuse of a common good. Phone numbers, produced through international cooperation, should not be appropriated freely by private corporations without accountability. While it may be too late to reverse this trend, we should consider a contribution system for companies that store and use phone numbers as identifiers.

    For example, companies that maintain databases with millions of unique phone numbers could be required to pay an annual fee for each phone number they store. This fee could be distributed to the countries associated with those numbers. Such a system would achieve two things:

    1. Encourage Accountability : Companies would need to evaluate whether collecting and storing phone numbers is truly essential for their business. If the data isn’t valuable enough to justify the cost, they might choose not to collect it.
    2. Promote Fairness : For companies that rely heavily on phone numbers to track, match, and build private, non-interoperable services, this fee would act as a fair contribution, akin to taxes paid for using public road infrastructure.
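    To illustrate how such a contribution could be computed, here is a sketch that groups stored numbers by country calling code and applies a flat yearly fee. The fee amount, the tiny prefix table, and the function name are all illustrative assumptions; real E.164 parsing needs a full prefix database such as libphonenumber:

```python
from collections import Counter

def yearly_fees(stored_numbers, fee_per_number_eur=0.10):
    """Group stored phone numbers by country calling code and
    charge a flat yearly fee per number, payable to that country.
    The prefix table and fee are placeholders for illustration."""
    prefixes = {"1": "US/CA", "33": "FR", "44": "UK", "49": "DE"}
    counts = Counter()
    for number in stored_numbers:
        digits = number.lstrip("+")
        # Try longer prefixes first, since calling codes vary in length
        for length in (3, 2, 1):
            country = prefixes.get(digits[:length])
            if country:
                counts[country] += 1
                break
    return {country: n * fee_per_number_eur for country, n in counts.items()}

print(yearly_fees(["+33612345678", "+447700900123", "+14155550100"]))
# → {'FR': 0.1, 'UK': 0.1, 'US/CA': 0.1}
```

    At WhatsApp scale (billions of stored numbers), even a small per-number fee would make storing numbers "just in case" a real cost rather than a free default.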

    Beyond Taxes: The Push for Interoperability

    Of course, a contribution system alone won’t solve the larger issue. We also need a significant push toward interoperable and federated messaging. While the European Digital Markets Act (DMA) includes an interoperability requirement, it doesn’t go far enough. Interoperability alone cannot address the challenges of closed ecosystems.

    I’ll delve deeper into why interoperability must be paired with federation in a future article, as this is a critical piece of the puzzle.

    Interoperability vs. Velocity

    To conclude, I’d like to reference the introduction of the IETF SPIN draft, which perfectly encapsulates the trade-offs between interoperability and innovation:

    Voice, video and messaging today is commonplace on the Internet, enabled by two distinct classes of software. The first are those provided by telecommunications carriers that make heavy use of standards, such as the Session Initiation Protocol (SIP) [RFC3261]. In this approach - which we call the telco model - there is interoperability between different telcos, but the set of features and functionality is limited by the rate of definition and adoption of standards, often measured in years or decades. The second model - the app model - allows a single entity to offer an application, delivering both the server side software and its corresponding client-side software. The client-side software is delivered either as a web application, or as a mobile application through a mobile operating system app store. The app model has proven incredibly successful by any measure. It trades off interoperability for innovation and velocity.

    The downside of the loss of interoperability is that entry into the market place by new providers is difficult. Applications like WhatsApp, Facebook Messenger, and Facetime, have user bases numbering in the hundreds of millions to billions of users. Any new application cannot connect with these user bases, requiring the vendor of the new app to bootstrap its own network effects.

    This summary aligns closely with the ideas I’ve explored in this article.

    I believe we’ve reached a point where we need interoperability far more than continued innovation in voice, video, and messaging. While innovation in these areas has been remarkable, we have perhaps been too eager—or too blind—to sacrifice interoperability in the name of progress.

    Now, the pendulum is poised to swing back. Centralization must give way to federation if we are to maintain the universality that once defined global communication. Without federation, there can be no true global and universal service, and without universality, we risk regressing, fragmenting all our communication systems into isolated and proprietary silos.

    It’s time to prioritize interoperability, to reclaim the vision of a truly connected world where communication is open, accessible, and universal.

      ProcessOne: Fluux multiple Subscriptions/Services

      news.movim.eu / PlanetJabber • 15 January, 2025

    Fluux is our ejabberd Business Edition cloud service. With a subscription, we deploy, manage, update and scale an instance of our most scalable messaging server. Up to now, if you wanted to deploy several services, you had to create another account with a different email. Starting today, you can manage and pay for different servers from a single Fluux account.

    Here is how to use this feature. On the Fluux dashboard main page, after the list of your services/platforms, you may have noticed a "New" button.

    [Screenshot: the "New" button on the dashboard]

    You will then be redirected to a page where you can choose your plan.

    [Screenshot: plan selection page]

    Once the terms and conditions are approved, you will be able to fill in your card information on a page hosted by our payment provider.

    [Screenshot: payment form]

    When the payment succeeds, you will be redirected to the Fluux console with a link to create your service:

    [Screenshot: link to create your service]

    On this last page, you can provide a technical name that will be used to provision your Fluux service.

    [Screenshot: technical name form]

    After about 10 minutes, you can enjoy your new service at techname.m.in-app.io (such as test1.m.in-app.io in the screenshot above).