
      ProcessOne: ProcessOne Unveils New Website

      news.movim.eu / PlanetJabber • 22 October, 2024 • 1 minute

    We’re excited to announce the relaunch of our website, designed to better showcase our expertise in large-scale messaging solutions, highlighting our full spectrum of supported protocols—from XMPP to MQTT and Matrix. This reflects our core strength: delivering reliable messaging at scale.

    The last major redesign was back in October 2017, so this update was long overdue. As we say farewell to the old design, here’s a screenshot of the previous version to commemorate the journey so far.

    In addition to refreshing the layout and structure, we’ve made a significant change under the hood by migrating from WordPress to Ghost. After using Ghost for my personal blog and being thoroughly impressed, we knew it was the right choice for ProcessOne. The new platform offers not only long-term maintainability but also a much more streamlined, enjoyable day-to-day experience, thanks to its faster and more efficient authoring tools.

    All of our previous blog content has been successfully migrated, and we’re now in a great position to deliver more frequent updates on topics such as messaging, XMPP, ejabberd, MQTT, and Matrix. Stay tuned for exciting new posts!

    We’d love to hear your feedback and suggestions on what topics you’d like us to cover next. To join the conversation, simply create an account on our site and share your thoughts.


      Erlang Solutions: Client Case Studies with Erlang Solutions

      news.movim.eu / PlanetJabber • 17 October, 2024 • 2 minutes

    At Erlang Solutions, we’ve worked with diverse clients, solving business challenges and delivering impactful results. In this post, we’d like to share some of our top client case studies with you.

    Get a glimpse into how our leading technologies—Erlang, Elixir, MongooseIM, and more—combined with our expert team, have transformed the outcomes for major industry players.

    Transforming streaming with zero downtime for TV4

    Our first client case study is our partnership with TV4. The leading Nordic broadcaster needed to address major challenges in the competitive streaming industry. With global giants like Netflix and Disney Plus on the rise, TV4 had to unify user data from multiple platforms into a seamless streaming experience for millions of subscribers.

    Using Elixir, we ensured a smooth migration and helped TV4 reduce infrastructure costs and improve efficiency.

    Check out the full TV4 case study.

    Secure messaging solutions for financial services with TeleWare

    Erlang Solutions partnered with TeleWare to enhance their Reapp with secure instant messaging (IM) capabilities for a major UK financial services group. As TeleWare aimed to meet strict regulatory requirements while improving user experience, they needed a robust, scalable solution that could seamlessly integrate into their existing infrastructure.

    We utilised MongooseIM’s out-of-the-box functionality, and TeleWare quickly integrated group chat features that allowed secure collaboration while meeting Financial Conduct Authority (FCA) compliance standards.

    Take a look at the full TeleWare case study.

    Gaming experiences with enhanced scalability and performance for FACEIT

    FACEIT, the leading independent competitive gaming platform with over 25 million users, faced scalability and performance challenges. As its user base grew, FACEIT needed to upgrade its systems to handle hundreds of thousands of players seamlessly.

    By upgrading to the latest version of MongooseIM and Erlang, we delivered a solution that managed large user lists and improved overall system efficiency.

    Explore the full FACEIT case study.

    Rapid growth with scalable systems for BET Software

    In another one of our client case studies, we worked with BET Software, a leading betting software provider in South Africa, to address the challenges posed by rapid growth and increasing user demand. As the main technology provider for Hollywoodbets, BET Software needed a more resilient and scalable system to support peak betting periods.

    By using Elixir to support the transition to a distributed data architecture, we helped BET Software eliminate bottlenecks and ensure seamless service, even during the busiest betting events.

    Read the BET Software case study in full.

    Innovation and competitive edge with International Registries Inc.

    The final client case study of this series is with International Registries Inc. (IRI), a global leader in maritime and corporate registry services that was looking to enhance its technological infrastructure and strengthen its competitive advantage.

    Erlang Solutions helped IRI by using Elixir to reduce costs, improve system maintainability, and decommission servers.

    Discover the complete IRI case study.

    Real results from client case studies

    Our client case study examples show how we help companies like TV4, FACEIT, TeleWare, BET Software, and International Registries Inc. solve tough tech challenges and excel in competitive markets. Whether it’s boosting performance, securing communications, or scaling for growth, our solutions unlock new possibilities.

    You can explore more Erlang Solutions case studies here.

    If you’d like to chat with the Erlang Solutions team about what we can do for you, feel free to drop us a message.

    The post Client Case Studies with Erlang Solutions appeared first on Erlang Solutions.


      Ignite Realtime Blog: Smack 4.5.0-beta5 released

      news.movim.eu / PlanetJabber • 17 October, 2024

    The Ignite Realtime developer community is happy to announce that Smack 4.5 has entered its beta phase. Smack is an XMPP client API written in Java that is able to run on Java SE and Android. Smack’s beta phase already started a few weeks ago, but 4.5.0-beta5 is considered a good candidate to announce, as many smaller issues have been ironed out.

    With Smack 4.5 we bumped the minimum Java version to 11. Furthermore, Smack now requires a minimum Android API level of 26 to run.

    If you are using Smack 4.4 (or maybe an even older version), then right now is the perfect time to create an experimental branch with Smack 4.5 to ease the transition.

    The Smack 4.5 API is considered stable; however, small adjustments are still possible during the beta phase.



      Erlang Solutions: Why Open Source Technologies is a Smart Choice for Fintech Businesses

      news.movim.eu / PlanetJabber • 10 October, 2024 • 11 minutes

    Traditionally, the fintech industry relied on proprietary software, with usage and distribution restricted by paid licences. Fintech open-source technologies were distrusted due to security concerns over visible code in complex systems.

    Fast-forward to today, and financial institutions, including neobanks like Revolut and Monzo, have embraced open-source solutions. These banks have built technology stacks on open-source platforms, using new software and innovation to strengthen their competitive edge.

    While proprietary software has its role, it faces challenges exemplified by Oracle Java’s subscription model changes, which have led to significant cost hikes. In contrast, open source delivers flexibility, scalability, and more control, making it a great choice for fintechs aiming to remain adaptable.

    Curious why open source is the smart choice for fintech? Let’s look into how this shift can help future-proof operations, drive innovation, and enhance customer-centric services.

    The impact of Oracle Java’s pricing changes

    Before we understand why open source is a smart choice for fintech, let’s look at a recent example that highlights the risks of relying on proprietary software—Oracle Java’s subscription model changes.

    A change to subscription

    Java, known as the “language of business,” has been the top choice for developers and 90% of Fortune 500 companies for over 28 years, due to its stability, performance, and strong Oracle Java community.

    In January 2023, Oracle quietly shifted its Java SE subscription model to an employee-based system, charging businesses based on total headcount, not just the number of users. This change alarmed many subscribers and resulted in steep increases in licensing fees. According to Gartner, these changes made operations two to five times more expensive for most organisations.

    Oracle Java SE Universal Subscription Global Price List (by volume)

    Impact on Oracle Java SE user base

    By January 2024, many Oracle Java SE subscribers had switched to OpenJDK, the open-source version of Java. Online sentiment towards Oracle has been unfavourable, with many users expressing dissatisfaction in forums. Those who stuck with Oracle are now facing hefty subscription fee increases with little added benefit.

    Lessons from Oracle Java SE

    For fintech companies, Oracle Java’s pricing changes have highlighted the risks of proprietary software. In particular, there are unexpected cost hikes, less flexibility, and disruptions to critical infrastructure. Open source solutions, on the other hand, give fintech firms more control, reduce vendor lock-in, and allow them to adapt to future changes while keeping costs in check.

    The advantages of open source technologies for Fintech

    Open source software is gaining attention in financial institutions, thanks to the rise of digital financial services and fintech advancements.

    Open-source use is expected to grow by 24% by 2025, and companies that embrace it benefit from enhanced security, support for cryptocurrency trading, and a boost to fintech innovation.

    Cost-effectiveness

    The cost advantages of open-source software have been a major draw for companies looking to shift from proprietary systems. For fintech companies, open-source reduces operational expenses compared to the unpredictable, high costs of proprietary solutions like Oracle Java SE.

    Open source software is often free, allowing fintech startups and established firms to lower development costs and redirect funds to key areas such as compliance, security, and user experience. It also avoids fees like:

    • Multi-user licences
    • Administrative charges
    • Ongoing annual software support charges

    These savings help reduce operating expenses while enabling investment in valuable services like user training, ongoing support, and customised development, driving growth and efficiency.

    A solution to big tech monopolies

    Monopolies in tech, particularly in fintech, are increasing. As reported by CB Insights, about 80% of global payment transactions are controlled by just a few major players. These monopolies stifle innovation and drive up costs.

    Open-source software decentralises development, preventing any single entity from holding total control. It offers fintech companies an alternative to proprietary systems, reducing reliance on monopolistic players and fostering healthy competition. Open-source models promote transparency, innovation, and lower costs, helping create more inclusive and competitive systems.

    Transparent and secure solutions

    Security concerns have been a major roadblock that causes companies and startups to hesitate in adopting open-source software.

    A common myth about open source is that its public code makes it insecure. But open-source software benefits from transparency, as its code is under continuous public scrutiny. Security flaws are discovered and addressed quickly by the community, unlike in proprietary software, where vulnerabilities may remain hidden.

    An example is Vocalink, which powers real-time global payment systems. Vocalink uses Erlang, an open-source language designed for high-availability systems, ensuring secure, scalable payment handling. The transparency of open source allows businesses to audit security, ensure compliance, and quickly implement fixes, leading to more secure fintech infrastructure.

    Ongoing community support

    Beyond security, open source benefits from vibrant communities of developers and users who share knowledge and collaborate to enhance software. This fosters innovation and accelerates development, allowing for faster adaptation to trends or market demands.

    Since the code is open, fintech firms can build custom solutions, which can be contributed back to the community for others to use. The rapid pace of innovation within these communities helps keep the software relevant and adaptable.

    Interoperability

    Interoperability is a game-changer for open-source solutions in financial institutions, allowing for the seamless integration of diverse applications and systems, which is essential for financial services with complex tech stacks.

    By adopting open standards (publicly accessible guidelines ensuring compatibility), financial institutions can eliminate costly manual integrations and enable plug-and-play functionality. This enhances agility, allowing institutions to adopt the best applications without being tied to a single vendor.

    A notable example is NatWest’s Backplane, an open-source interoperability solution built on FDC3 standards. Backplane enables customers and fintechs to integrate their desktop apps with various banking and fintech applications, enhancing the financial desktop experience. This approach fosters innovation, saves time and resources, and creates a more flexible, customer-centric ecosystem.

    Future-proofing for longevity

    Open-source software has long-term viability. Since the source code is accessible, even if the original team disbands, other organisations, developers or the community at large can maintain and update the software. This ensures the software remains usable and up-to-date, preventing reliance on unsupported tools.

    Open Source powering Fintech trends

    According to the latest study by McKinsey and Company, Artificial Intelligence (AI), machine learning (ML), blockchain technology, and hyper-personalisation will be among the key technologies driving financial services in the next decade.

    Open-source platforms will play a key role in supporting and accelerating these developments, making them more accessible and innovative.

    AI and fintech innovation

    • Cost-effective AI/ML : Open-source AI frameworks like TensorFlow, PyTorch, and scikit-learn enable startups to prototype and deploy AI models affordably, with the flexibility to scale as they grow. This democratisation of AI allows smaller players to compete with larger firms.
    • Fraud detection and personalisation : AI-powered fraud detection and personalised services are central to fintech innovation. Open-source AI libraries help companies like Stripe and PayPal detect fraudulent transactions by analysing patterns, while AI enables dynamic pricing and custom loan offers based on user behaviour.
    • Efficient operations : AI streamlines back-office tasks through automation, knowledge graphs, and natural language processing (NLP), improving fraud detection and overall operational efficiency.
    • Privacy-aware AI : Emerging technologies like federated learning and encryption tools help keep sensitive data secure, enabling rapid AI innovation while ensuring privacy and compliance.

    Blockchain and fintech

    Open-source blockchain platforms allow fintech startups to innovate without the hefty cost of proprietary systems:

    • Open-source blockchain platforms : Platforms like Ethereum, Bitcoin Core, and Hyperledger are decentralising finance, providing transparency, reducing reliance on intermediaries, and reshaping financial services.
    • Decentralised finance (DeFi) : DeFi is projected to see an impressive rise, with P2P lending growing from $43.16 billion in 2018 to an estimated $567.3 billion by 2026. Platforms like Uniswap and Aave, built on Ethereum, are pioneering decentralised lending and asset management, offering an alternative to traditional banking. By 2023, Ethereum alone locked $23 billion in DeFi assets, proving its growing influence in the fintech space.
    • Enterprise blockchain solutions : Open-source frameworks like Hyperledger Fabric and Corda are enabling enterprises to develop private, permissioned blockchain solutions, enhancing security and scalability across industries, including finance.
    • Cost-effective innovation : Startups leveraging open-source blockchain technologies can build innovative financial services while keeping costs low, helping them compete effectively with traditional financial institutions.

    Hyper-personalisation

    Hyper-personalisation is another key trend in fintech, with AI and open-source technologies enabling companies to create highly tailored financial products. This shift moves away from the traditional “one-size-fits-all” model, helping fintechs solve niche customer challenges and deliver more precise services.

    Consumer demand for personalisation

    A Salesforce survey found that 65% of consumers expect businesses to personalise their services, while 86% are willing to share data to receive more customised experiences.

    Source: State of the Connected Customer

    The expectation for personalised services is shaping how financial institutions approach customer engagement and product development.

    Real-world examples of open-source fintech

    Companies like Robinhood and Chime leverage open-source tools to analyse user data and create personalised financial recommendations. These platforms use technologies like Apache Kafka and Apache Spark to process real-time data, improving the accuracy and relevance of their personalised offerings, from customised investment options to tailored loan products.

    Implementing hyper-personalisation lets fintech companies strengthen customer relationships, boost retention, and increase deposits. By leveraging real-time, data-driven technologies, they can offer highly relevant products that foster customer loyalty and maximise value throughout the customer lifecycle. With the scalability and flexibility of open-source solutions, companies can provide precise, cost-effective personalised services, positioning themselves for success in a competitive market.

    Erlang and Elixir: Open Source solutions for fintech applications

    Released as open source in 1998, Erlang has become essential for fintech companies that need scalable, high-concurrency, and fault-tolerant systems. Its open-source nature, combined with the capabilities of Elixir (which builds on Erlang’s robust architecture), enables fintech firms to innovate without relying on proprietary software, providing the flexibility to develop custom and efficient solutions.

    Both Erlang and Elixir are built on an architecture designed for virtually zero downtime, making them well suited to real-time financial transactions.

    Why Erlang and Elixir are ideal for Fintech:

    • Reliability : Erlang’s and Elixir’s design ensures that applications continue to function smoothly even during hardware or network failures, which is crucial for financial services that operate 24/7 and must guarantee uninterrupted service. Elixir inherits Erlang’s reliability while providing a more modern syntax for development.
    • Scalability : Erlang and Elixir can handle thousands of concurrent processes, making them perfect for fintech companies looking to scale quickly, especially when dealing with growing data volumes and transactions. Elixir enhances Erlang’s scalability with modern tooling and improved performance for certain types of workloads.
    • Fault tolerance : Built-in error detection and recovery features ensure that unexpected failures are managed with minimal disruption. This is vital for fintech applications, where downtime can lead to significant financial losses. Erlang’s automatic-recovery philosophy, inherited by Elixir, helps systems approach continuous availability without losing transactions (see the supervision sketch after this list).
    • Concurrency & distribution : Both Erlang and Elixir excel at managing multiple concurrent processes across distributed systems. This makes them ideal for fintechs with global operations that require real-time data processing across various locations.
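
    To make these guarantees concrete, here is a minimal Elixir sketch (ours, with hypothetical module names, not code from any company mentioned in this post) of the supervision pattern they rest on: a supervisor restarts a crashed worker automatically, so a single failure does not take the service down.

    # A worker that processes transactions; if it crashes, its supervisor
    # restarts it with a fresh, clean state.
    defmodule Payments.Worker do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

      @impl true
      def init(_opts), do: {:ok, %{processed: 0}}

      @impl true
      def handle_call({:process, _txn}, _from, state) do
        {:reply, :ok, %{state | processed: state.processed + 1}}
      end
    end

    defmodule Payments.Supervisor do
      use Supervisor

      def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

      @impl true
      def init(_opts) do
        # :one_for_one restarts only the crashed child, leaving siblings untouched.
        Supervisor.init([Payments.Worker], strategy: :one_for_one)
      end
    end

    Killing the worker and calling it again shows the restart in action: the process dies, the supervisor immediately replaces it, and callers see a brief blip rather than an outage.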

    Open-source fintech use cases

    Several leading fintech companies have already used Erlang to build scalable, reliable systems that support their complex operations and real-time transactions.

    • Klarna : This major European fintech relies on Erlang to manage real-time e-commerce payment solutions, where scalability and reliability are critical for managing millions of transactions daily.
    • Goldman Sachs : Erlang is utilised in Goldman Sachs’ high-frequency trading platform, allowing for ultra-low latency and real-time processing essential for responding to market conditions in microseconds.
    • Kivra : Erlang/Elixir supports Kivra’s backend services, managing secure digital communications for millions of users and ensuring constant uptime and data security.

    Erlang and Elixir: supporting future fintech trends

    The features of Erlang and Elixir align well with emerging fintech trends:

    • DeFi and Decentralised Applications (dApps) : With the growth of decentralised finance (DeFi), Erlang’s and Elixir’s fault tolerance and real-time scalability make them ideal for building dApps that require secure, distributed networks capable of handling large transaction volumes without failure.
    • Hyperpersonalisation : As demand for hyperpersonalised financial services grows, Erlang and Elixir’s ability to process vast amounts of real-time data across users simultaneously makes them vital for delivering tailored, data-driven experiences.
    • Open banking : Erlang and Elixir’s concurrency support enables fintechs to build seamless, scalable platforms in the open banking era, where various financial systems must interact across multiple applications and services to provide integrated solutions.

    Erlang and Elixir can handle thousands of real-time transactions with zero downtime, making them well suited for trends like DeFi, hyperpersonalisation, and open banking. Their flexibility and active developer community ensure that fintechs can innovate without being locked into costly proprietary software.

    To conclude

    Fintech businesses are navigating an increasingly complex and competitive landscape where traditional solutions no longer provide a competitive edge. If you’re a company still reliant on proprietary software, ask yourself: Is your system equipped to expect the unexpected? Can your existing solutions keep up with market demands?

    Open-source technologies offer a solution to these challenges. Fintech firms can reduce costs, improve security, and, most importantly, innovate and scale according to their needs. Whether by reducing vendor lock-ins, tapping into a vibrant developer community, or leveraging customisation, open-source software is set to transform the fintech experience, providing the tools necessary to stay ahead in a digital-first world. If you’re interested in exploring how open-source solutions like Erlang or Elixir can help future-proof your fintech systems, contact the Erlang Solutions team .

    The post Why Open Source Technologies is a Smart Choice for Fintech Businesses appeared first on Erlang Solutions.


      Erlang Solutions: Why do systems fail? Tandem NonStop system and fault tolerance

      news.movim.eu / PlanetJabber • 3 October, 2024 • 6 minutes

    If you’re an Elixir, Gleam, or Erlang developer, you’ve probably heard about the capabilities of the BEAM virtual machine, such as concurrency, distribution, and fault tolerance. Fault tolerance was one of the biggest concerns of Tandem Computers, who created their NonStop architecture for high availability in their systems, which included ATMs and mainframes.

    In this post, I’ll be sharing the fundamentals of the NonStop architecture design with you. Their approach to achieving high availability in the presence of failures is similar to some implementations in the Erlang Virtual Machine, as both rely on concepts of processes and modularity.

    Systems with High Availability

    Why do systems fail? This question should probably be asked more often, considering all the factors it involves. It was central to the NonStop architecture because achieving high availability depends on understanding system failures.

    For Tandem, any system has critical components that could potentially cause failures. How often do you ask yourself how long your system can operate before a failure? There is a metric for this, known as MTBF (mean time between failures), calculated by dividing the total operating hours of the system by the number of failures; the result represents the average number of hours of uninterrupted operation.
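
    As a quick worked example (the figures are invented for illustration), the calculation is simple division; here in Elixir:

    # MTBF = total operating hours / number of failures
    total_hours = 8_760             # one year of continuous operation
    failures = 4                    # failures observed in that period
    mtbf = total_hours / failures
    IO.puts("MTBF: #{mtbf} hours")  # => MTBF: 2190.0 hours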

    Many factors can affect the MTBF, including administration, configuration, maintenance, power outages, hardware failures, and more. So, how can you survive these eventualities to achieve at least virtual high availability in your systems?

    High availability in hardware has taught us important insights about continuous operation. Some hardware implementations rely on decomposing the system into modules, using modularity to contain failures and maintain operation through backup modules, rather than letting the whole system break and need a restart. The main concept, from this point of view, is to use modules as units of failure and replacement.

    High Availability for Software Systems

    But what about software’s high availability? Just as with hardware, we can find important lessons from operating system designers, who decompose systems into modules as units of service. This approach provides a unit of protection and fault containment.

    To achieve fault tolerance in software, it’s important to address similar insights from the NonStop design:

    • Modularity through processes and messages.
    • Fault containment.
    • Process pairs for fault tolerance.
    • Data integrity.

    Can you recognise some similarities so far?

    The NonStop architecture essentially relies on these concepts. The key to high availability, as I mentioned before, is modularity as a unit of service failure and protection.

    A process should have a fail-fast mechanism: it should be able to detect a failure during its operation, signal the failure, and then stop. In this way, a system achieves fault detection through fault containment and by sharing no state.
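
    A minimal Elixir sketch of this fail-fast idea (our illustration, not Tandem’s code): the worker crashes as soon as it detects a bad state, and a monitoring process receives the failure signal and can react, for example by starting a clean replacement.

    defmodule FailFast do
      def run do
        # Fail fast: the worker crashes on the first inconsistency it
        # detects instead of limping on with corrupt state.
        {pid, ref} = spawn_monitor(fn ->
          raise "inconsistent state detected"
        end)

        receive do
          {:DOWN, ^ref, :process, ^pid, reason} ->
            IO.puts("worker #{inspect(pid)} stopped: #{inspect(reason)}")
            # A supervisor would start a fresh replacement process here.
        end
      end
    end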

    Another important consideration for your system is how long it takes to recover from a failure. Jim Gray, a software designer and researcher at Tandem Computers, in his paper “Why Do Computers Stop and What Can Be Done About It?”, proposed a model of failure based on two kinds of bugs: Bohrbugs, which deterministically cause critical failures during operation, and Heisenbugs, softer, transient faults that can persist in a system for years.

    Implementing Processes-Pairs Strategies

    The previous categorisation helps us better understand the strategies for implementing a process-pairs design, based on a primary process and a backup process:

    • Lockstep: Primary and backup processes execute the same task, so if the primary fails, the backup continues the execution. This is good for hardware failures, but in the presence of Heisenbugs both processes will fail in the same way.
    • State checkpointing: A requestor entity is connected to a process pair. When the primary process stops operating, the requestor switches to the backup process. You need to design the requestor logic yourself (a minimal sketch follows after this list).
    • Automatic checkpointing: Similar to the previous one, but the kernel manages the checkpointing.
    • Delta checkpointing: Similar to state checkpointing, but using logical rather than physical updates.
    • Persistence: When the primary process fails, the backup process starts its operation without a state. The system must implement a way to synchronise all the modules and avoid corrupt interactions.
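
    As promised above, here is a hypothetical Elixir sketch of the state-checkpointing strategy (our illustration, not Tandem’s or Gray’s code): the primary checkpoints its state to the backup after every step, and on takeover the backup resumes from the last checkpoint.

    defmodule ProcessPair do
      # Spawn the backup first so the primary can checkpoint to it.
      def start do
        backup = spawn(fn -> backup_loop(%{count: 0}) end)
        primary = spawn(fn -> work_loop(backup, %{count: 0}) end)
        {primary, backup}
      end

      # The primary does the work, checkpointing its state after each step.
      defp work_loop(backup, state) do
        receive do
          {:work, item} ->
            state = %{state | count: state.count + 1}
            if backup, do: send(backup, {:checkpoint, state})
            IO.puts("handled #{inspect(item)}, count=#{state.count}")
            work_loop(backup, state)
        end
      end

      # The backup records checkpoints until told to take over, then
      # resumes the work loop from the last known-good state.
      defp backup_loop(last_state) do
        receive do
          {:checkpoint, state} -> backup_loop(state)
          :takeover -> work_loop(nil, last_state)
        end
      end
    end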

    All of these insights are drawn from Jim Gray’s paper, written in 1985 and referenced in Joe Armstrong’s 2003 thesis, “Making Reliable Distributed Systems in the presence of software errors” . Joe emphasised the importance of the Tandem NonStop system design as an inspiration for the OTP design principles.

    Elixir and High Availability

    So if you’re a software developer learning Elixir, you’ll probably be amazed by all the capabilities and great tooling available to build software systems. By leveraging frameworks like Phoenix and toolkits such as Ecto, you can build full-stack systems in Elixir. However, to fully harness the power of the Erlang virtual machine (BEAM) you must understand processes.

    Just as the Tandem computer system relied on transactions, fault containment and a fail-fast mechanism, Erlang achieves high availability through processes. Both systems consider it important to modularise systems into units of service and failure: processes.

    About the process

    A process is the basic unit of abstraction in Erlang, a crucial concept because the Erlang virtual machine (BEAM) operates around this. Elixir and Gleam share the same virtual machine, which is why this concept is important for the entire ecosystem.

    A process:

    • Is a strongly isolated entity.
    • Is lightweight to create and destroy.
    • Interacts with other processes only through message passing.
    • Shares no state with other processes.
    • Does what it is supposed to do or fails.

    Just remember, these are the fundamentals of Erlang, which is considered a message-oriented language, and of its virtual machine (BEAM), on which Elixir runs.
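
    These fundamentals fit in a few lines of Elixir (a toy example of mine, not from the references above): two isolated processes that share no state and communicate only through messages.

    parent = self()

    pid = spawn(fn ->
      receive do
        # Message passing is the only way in or out of this process.
        {:ping, from} -> send(from, :pong)
      end
    end)

    send(pid, {:ping, parent})

    receive do
      :pong -> IO.puts("got :pong back from #{inspect(pid)}")
    end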

    If you want to read more about processes in Elixir, I recommend this article I wrote: Understanding Processes for Elixir Developers.

    I consider it important to read papers like Jim Gray’s article because they teach us the history behind implementations that attempt to solve problems. I find it interesting to read and share these insights with the community because it’s crucial to understand the context behind the tools we use. Recognising that implementations exist for a reason and have stories behind them is essential.

    You can find many similarities between Tandem and Erlang design principles:

    • Both aim to achieve high availability .
    • Isolation of operations is extremely important to contain failure.
    • Processes that share no state are crucial for building modular systems.
    • Process interactions are key to maintaining operation in the presence of errors. While Tandem computers implemented a process-pairs design, Erlang implemented OTP patterns.

    To conclude

    Take some time to read about the Tandem computer design. It’s interesting because its features share significant similarities with the OTP design principles for achieving high availability. Failure is something we need to deal with in any kind of system, so it’s important to be aware of its causes and to know what you can do to manage it and keep operating. This is crucial for any software developer, but if you’re an Elixir developer, you’ll probably want to dive deeper into how processes work and how to start designing components with them and OTP.

    Thanks for reading about the Tandem NonStop system. If you like this kind of content, I’d appreciate it if you shared it with your community or teammates. You can visit this public repository on GitHub where I’m adding my graphic recordings and insights related to the Erlang ecosystem or contact the Erlang Solutions team to chat more about Erlang and Elixir.

    Illustrations by Visual Partner-Ship @visual_partner

    Jaguares, ESL Americas Office

    @carlogilmar

    The post Why do systems fail? Tandem NonStop system and fault tolerance appeared first on Erlang Solutions.


      Ignite Realtime Blog: Dan is voted in the XSF's Council!

      news.movim.eu / PlanetJabber • 21 December, 2023

    Our very own @danc was voted into the XMPP Standards Foundation Council not too long ago!

    The XMPP Standards Foundation is an independent, nonprofit standards development organisation whose primary mission is to define open protocols for presence, instant messaging, and real-time communication and collaboration on top of the IETF’s Extensible Messaging and Presence Protocol (XMPP). Most of the projects that we’re maintaining in the Ignite Realtime community have a strong dependency on XMPP.

    The XMPP Council, of which Dan is now a member, is the technical steering group that approves XMPP Extension Protocols. With that, he’s now at the forefront of new developments within the XMPP community! Congrats to you, Dan!

    For other release announcements and news, follow us on Mastodon or X.



      ProcessOne: Instant Messaging: Protocols are “Commons”, Let’s Take Them Seriously

      news.movim.eu / PlanetJabber • 20 December, 2023 • 8 minutes

    TL;DR

    Thirty years after the advent of the first instant messaging services, we still haven’t reached the stage where instant messaging platforms can freely communicate with each other, as is the case with email. In 1999, the Jabber/XMPP protocol was created and standardized for this purpose by the Internet Engineering Task Force (IETF). Since then, proprietary messaging services have continuously leveraged the power of internet giants to dominate the market. Why does neither XMPP nor the more recent Matrix, which aimed to improve upon it, break through this barrier, when it’s clear that protocols must be open to enable exchange? Without this fundamental principle, the Internet itself wouldn’t exist.

    In the following article, I revisit how the French government recently promoted the instant messaging service Olvid and what this reveals about our approach to digital technology. It’s frustrating to see France promote a secure, yet proprietary messaging service that offers no progress in terms of interoperability, especially at a time when the European Union is striving to open up the sector by requiring all messaging services to be capable of intercommunication, through the Digital Markets Act .

    I conclude with reflections on our inability in Europe to collaborate on “commons,” our difficulty in building a foundation, an ecosystem that allows for healthy co-opetition, a blend of competition and collaboration, which is the only way to regain significance in the digital economy. Short-term political thinking forces our companies into an every-man-for-himself approach, preferring to dominate a small market rather than share a larger one.

    Today, perhaps, it’s time for a change?

    Thirty years and counting since the emergence of the first instant messaging services, we still lack a universally accepted exchange protocol, as is the case with email. The Jabber protocol, later renamed XMPP (eXtensible Messaging and Presence Protocol) and made a standard, was born with the hope of breaking the proliferation of isolated silos like MSN, ICQ, Yahoo!, which did not communicate with each other. Today, other silos have emerged, but the problem persists: it is still impossible to exchange messages between accounts from different major messaging providers. Why? Let me tell you the story of a clumsy communication operation around a French messaging service, Olvid, which illustrates well the familiar patterns we often find ourselves stuck in.

    The French Government’s Endorsement of a Proprietary Messaging Service: A Closer Look

    I discovered the messaging service Olvid in late November 2023, following a flood of articles in the French press. I wondered how a company of 15 employees, created in 2019, had managed to get such press coverage. It was promoted directly by Prime Minister Elisabeth Borne: “Popular messaging applications like WhatsApp, Telegram or Signal have ‘security flaws’,” justified the office of Elisabeth Borne, who urged her ministers to download the French application (Les Échos, November 30, 2023). In November 2023, Matignon asked government members and ministerial offices to install this system on their phones and computers “to replace other instant messaging services to enhance the security of exchanges.” Then came the superlatives: “The most secure messaging service in the world” (Jean-Noël Barrot). “A step towards greater French sovereignty” (Elisabeth Borne). And it had to be done quickly: Elisabeth Borne asked ministers to “take all necessary steps” to deploy Olvid in their ministry “by December 8, 2023, at the latest” (Ouest France, November 29, 2023).

    Why Olvid? The articles I read on the subject remain relatively vague; I know mainly that it is certified by ANSSI, the French agency responsible for the state’s IT security. Yet, it’s far from the first secure messaging service I’ve come across, and it’s the first time I’ve heard of Olvid. What about other services, and especially Signal, which is recognized worldwide for its security, backed by audits? Among secure messengers, the list is long: Signal, Threema, Wire, Berty, etc. So, what security flaws are we talking about?

    Signal Hits Back: A Strong Response to Security Claims

    Signal’s response was swift, with a direct and clear position from Meredith Whittaker, president of the Signal Foundation:

    The French PM is mandating ministers use a small French messaging app. OK. But I’m alarmed that she’s claiming “security flaws” in Signal (et al) to justify the move. This claim is not backed by any evidence, and is dangerously misleading esp. coming from gov.
    If you want to use a French product go for it! But don’t spread misinfo in the process. Signal is independently audited, open source, and our protocol has been tested for >10yrs. We are serious about responsible disclosure and we prioritize all reports to security@signal.org
    Numérama, December 1, 2023

    Regarding Olvid’s security, the main argument seems to be as follows: the system does not rely on centralized directories and operates without identifiers, which means no user account is hosted in the cloud.

    First, it seems to me that this is the principle of key-based authentication. Message routing is done solely based on a key, in the cryptographic sense. If it is lost, it’s impossible to recover the account. Nothing revolutionary, then; it’s cryptography, dating back to the encryption software PGP (Pretty Good Privacy) of the 1990s and even before.

    Then, such a system generally requires the physical exchange of public keys. Where Olvid seems to stand out is in the alternative ways it proposes to simplify and lighten the burden of exchanging keys by meeting physically. This can work, first because the product is not free, so the user base is limited, whereas Signal, for example, offers a global platform and says it needs an identifier, the phone number, to limit spam. Then, these alternative methods rely on mobile device management (MDM) tools, interfacing with an enterprise version of the Olvid server. One way or another, this goes through a central point of distribution and reintroduces a weakness. It’s far from a completely decentralized protocol, like what the team building the Berty messaging service is trying to do, for instance.

    Browsing their site to find the protocol, I admit I choked a bit on some claims thrown around a little freely, for example Post Quantum Cryptography, cryptography that resists quantum computing. It’s nice, it’s pleasant, but in practice, what’s the reality? I didn’t find more detail behind this mention, and personally, being hit with such buzzwords makes me rather flee, as it smells of a salesperson who got a bit carried away. But let’s assume the Olvid team is composed of encryption experts. I skimmed their specifications, but I admit I’m not a mathematician, so who am I to judge their math formulas?

    What I do understand, however, is that almost all secure messaging systems, including Olvid, rely on the Double Ratchet algorithm, which was first introduced by… Signal.

    At the Heart of Messaging: The Critical Role of Protocols

    In terms of protocol, however, I am an expert. I have been working on instant messaging protocols since 1999. And, it’s not beautiful… Olvid’s protocol is the antithesis of what I would like to see in an ambitious messaging protocol. It is a proprietary, ad hoc protocol, not based on any standard, minimalist for now, and condemns itself to reinventing the wheel, poorly. The burning question is, why not choose an open protocol that already works on a large scale, like XMPP, adding their value on top? The Internet protocol, TCP/IP, is open, all machines in the world can communicate, yet there are competing internet service providers. I am still looking for an answer. Because XMPP is too complex, some will say? I think any sufficiently advanced chat protocol tends to become a derivative of XMPP, less accomplished. Come on, why not even use Matrix, a competing protocol to my favorite? Apart from simple ignorance, I see no reason. Unless it’s to lock down the platform, perhaps? But, locking a communication protocol makes no sense. It’s replaying the battle of internet protocols, TCP/IP versus X.25. A communication protocol is meant to be open and interoperable. Personally, I would invite Olvid to adopt a messaging standard. Let them turn to the W3C or IETF, to XMPP or MLS. These organizations do good work. And it’s a guarantee of sustainability and above all, of interoperability.

    We come to a very sore point. The European Commission, and therefore France as well, is discussing the implementation of the Digital Markets Act. Among the points the European Union wants to impose is… the interoperability of instant messaging services. How can the French government promote a messaging solution that is not interoperable, let alone standardized and open?

    I talked about Olvid’s proprietary protocol, which is actually more of an API (Application Programming Interface), that is, a document that describes how to automate certain functions of their server. What about the implementation? The client is open source (on iOS and Android), but I noticed calls in their exchange interface to URLs named /Freetrial, which implies payment. I am not sure that Olvid would welcome the idea of someone compiling and deploying their own version of the client. That’s the principle of open source, but such an initiative could try to circumvent payments to Olvid. Since no open-source server is available anyway, and the only one running is operated by Olvid, the client code is of little use. Especially since the client code is published by Olvid, but to what extent can we know whether it is 100% identical to the version distributed in the iOS and Android app stores? We don’t really have a way of knowing.

    I know that Olvid promises one day to release the server as Open Source. What I’ve seen of the protocol, their business model, and what they say about their implementation, very tied to the Amazon infrastructure (an infrastructure managed by an American company, so much for sovereignty), makes me think that this will not happen, at least not for a very long time. I hope, of course, to be wrong.

    Toward Openness and Collaboration in Digital Communication

    In the meantime? I would really like us to be serious about instant messaging: that all players in the sector finally row in the same direction, that those who work on open protocols offer free servers and clients, and that we build real collaboration, worthy of the construction of the internet protocols, to lay the foundation of a universal, open, open-source and truly interoperable messaging service. It doesn’t take much to develop the culture of “coopetition”: collaboration around a common good between competing companies.


    Found a mistake? I’m not perfect and would be happy to correct it. Contact us!

    — Photo by Steve Johnson on Unsplash

    The post Instant Messaging: Protocols are “Commons”, Let’s Take Them Seriously first appeared on ProcessOne.

      Isode: Red/Black – 2.1 New Capabilities

      news.movim.eu / PlanetJabber • 13 December, 2023 • 3 minutes

    Overview

    This release adds important new functionality and further device drivers to Red/Black, a management tool that allows you to monitor and control devices and servers across a network, with a particular focus on HF radio systems. A general summary is given in the white paper Red/Black Overview.

    Rules

    Red/Black 2.1 adds a Rules capability that allows rules to be specified in the Lua programming language, allowing for flexible control. Standard rules are provided, along with sample rules to help create rules useful for a deployment. There are a number of rule capabilities:

    • A basic rule capability is control based on device parameter values. Rules can generate alerts, for example to alert an operator at a selected severity when a message queue exceeds a certain size.
    • For devices with parameters that clearly show fault or exception status, standard device-type rules are provided that will alert the operator to the fault condition. This standard rule can be selected for devices of that type.
    • Rules can set parameters on devices, including control of device actions. For example, this can be used to turn off a device when a thermometer device records a high temperature.
    • Rules can reference devices connected in the communications chain. For example, a rule can be created to alert an operator if the frequency used on a radio does not match the supported frequency range of a connected antenna.
    • Rules can be used to reconfigure (soft) connectivity, for example to switch in a replacement device when a device fails.

    Snapshot

    Configuration snapshots can be taken, reflecting the current Red/Black configuration, and the Red/Black configuration can be reset to a snapshot. This capability is intended to record the standard operational state of a setup, allowing convenient reversion after temporary changes.

    eLogic Radio Gateway driver

    The eLogic Radio Gateway provides conversion between synchronous serial and TCP, with multiple converters in a single SNMP-managed box. A key target for this is data connectivity to remote Tx/Rx sites. The Red/Black driver enables configuration of TCP-to-Serial and Serial-to-TCP modes, enabling a Red/Black operator to change selected modems/radios.

    Web (http) Drivers

    Red/Black 2.1 has added an internal Isode framework for managing devices with an HTTP interface, which is being used in a number of new drivers. This is Isode’s preferred approach for managing devices. The new drivers are:

    1. M-Link. Allows monitoring of M-Link servers, showing:
      1. Number of connected users.
      2. Number of peer connections.
      3. Number of queued stanzas.
    2. Icon-5066. Controlling the STANAG 5066 product:
      1. Enable/disable node.
      2. Show STANAG 5066 address.
      3. Show number of connected SIS clients.
      4. Show whether flow is on or off.
    3. Icon-PEP. Providing:
      1. Enable/disable service.
      2. Show number of TCP connections.
      3. Show current transfer rate.
    4. Sodium Sync. Providing:
      1. Number of synchronizations.
      2. Last synchronization that made changes.
      3. List of synchronizations not working correctly.
      4. Alerts for failed synchronizations.
    5. Supported Modems. This replaces the drivers working directly with modems included in Icon-5066 3.0. The new driver talks directly to Proxy Modem, or to Icon-5066 where Proxy Modem is not used. It displays a wide range of modem parameters. Various modem types can be selected to display appropriate information from the connected device:
      1. Narrowband modem.
      2. Narrowband modem with ALE.
      3. Wideband modem.
      4. Modem/radio combined variants of the previous three types.

    Other

    • Parameter Encryption. Red/Black can securely store parameters, such as passwords, to prevent their exposure as command-line arguments to device drivers.
    • Device Ordering. Devices are now listed in alphabetical order.
    • Alert Source. Alerts now clearly show where they are generated (Red/Black; Rule; Device Driver; Device).
    • Link to Device Management. Where Red/Black-monitored devices have web management, the URL of the web interface can be configured in Red/Black so that the management UI can be accessed with a single click from Red/Black.

      Erlang Solutions: MongooseIM 6.2: Easy to set up, use and manage

      news.movim.eu / PlanetJabber • 13 December, 2023 • 10 minutes

    MongooseIM, which is our scalable, flexible and cost-efficient instant messaging server, is now easier to use than ever before. The latest release 6.2 introduces a completely new CETS in-memory storage backend, letting you easily deploy it with modern cloud infrastructure solutions such as Kubernetes. The XMPP extensions are also updated, which means that we support new features of the XMPP protocol.

    The new version of MongooseIM is very easy to try out, as there are two new options:

    • Firstly, you can check out trymongoose.im – a live demo installation of the latest version, which lets you create your own XMPP domain and experiment with it. It also showcases how a Phoenix web application can be integrated with MongooseIM using its GraphQL API.
    • If you want to set up your own MongooseIM installation, you can now easily set it up in Kubernetes with Helm. Our new Helm chart automatically templates the configuration files, making it possible to quickly set up a running cluster of several nodes connected to a database.

    One of the biggest new features is the support for CETS, which makes management of MongooseIM much easier than before. To fully appreciate this improvement, we need to start with an overview of the clustered storage options in MongooseIM. We will follow with a brief guide, helping you quickly set up a running server with the latest features enabled.

    From Mnesia to CETS

    MongooseIM is implemented in Erlang, making it possible to handle millions of connected clients exchanging messages. However, a typical user should not need any Erlang knowledge to deploy and maintain a messaging server. Up to version 6.1, there is one component that breaks this assumption, making management and maintenance much harder. This component is the built-in Erlang database, Mnesia, which is convenient when you are starting your journey with MongooseIM, because it resides on the local disk and does not need to be started as a separate service. All MongooseIM nodes are clustered together, and they replicate Mnesia tables between them.

    Issues with Mnesia

    When you go beyond small experiments on your local machine, it is essential not to store any persistent data in Mnesia, because it is not designed for storing large volumes of data. Also, network connectivity issues or incorrect restarts might make your database inconsistent, leading to unexpected errors and cluster nodes refusing to start. It is also difficult to migrate your data to another database. That is why it is strongly recommended to use a relational database management system (RDBMS) such as PostgreSQL or MySQL, which you can host yourself or use cloud-based solutions such as Amazon RDS. However, when you configure MongooseIM 6.1 and its extension modules to use RDBMS, you will find out that the server still needs Mnesia for its operation. This is because Mnesia is also used to store in-memory data shared between the cluster nodes. For example, by sharing user sessions MongooseIM can route messages between users connected to different nodes of the cluster.

    When Mnesia was first created, a server node used to be a long-running physical unit that is very rarely restarted – actually one of the main advantages of Erlang was the ability to significantly reduce downtime. With the introduction of virtualisation and containers, a server node is no longer tied to the underlying hardware, and new nodes can be dynamically added or removed. This means that the cluster is much more dynamic, and nodes can be started more often. This brings us to another issue of Mnesia – the need for storing the database schema on disk, which contains the information about all nodes in the cluster and their tables. This is mostly a problem with platforms like Kubernetes, where adding disk storage requires use of persistent volumes, which are costly and need to be manually deleted when a node is removed from the cluster. As a result, the whole management process becomes more error-prone.

    Another problem is the additional cluster management required for each node. When a new node starts up, it is not a member of any cluster; there is a join_cluster command that needs to be executed. The same happens with node removal, when leave_cluster needs to be called. For the convenience of the user, our Helm charts automatically call these commands for the started nodes, but they still need to be started in a particular order, which has to be respected when doing restarts and upgrades as well. If for some reason you change that order, the nodes might be locked until all of them are online (see the documentation), which is inconvenient, might result in overload, and can even cause the whole cluster to be down if the final node does not start up. Finally, network connectivity issues might result in an inconsistent database or other errors (even without persistent tables), which can be difficult to understand for anyone but Erlang developers and may require manual intervention on the affected nodes. The solution is usually to stop the affected node, clean up the Mnesia volume, and start it again, which adds unwanted downtime for the server and workload for the operator.

    It is important to note that we have these issues not because Mnesia is inherently bad, but because our use case has drifted away from its intended purpose, i.e. we need no persistence and transactions, but we would benefit from automatic features like simple conflict resolution and dynamic cluster discovery. This situation led us to develop a new library, which precisely meets our requirements.

    Introducing Cluster ETS

    CETS is a lightweight replication layer for ETS (Erlang Term Storage) tables. The main principle of this library is to replicate ETS data to other nodes of the cluster with simple and automatic conflict resolution. In most cases the conflicts are not even possible, because the key of each stored key-value tuple uniquely identifies the creating node. In MongooseIM, we are using the RDBMS cluster node discovery mechanism. This means that each cluster node updates the database periodically, storing its name and IP address in the discovery_nodes table. Other nodes check this table periodically to determine the cluster nodes, and connect to them. Nodes that are down for a long time (by default 1 hour) are removed from the table to avoid trying to connect them. The database used for CETS is the same one that is used to store other persistent data, so in a typical case there should be no extra databases required.
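
    The discovery flow described above can be pictured with a short Elixir sketch (a simplification of ours; the helper names are hypothetical, and the real logic lives in the CETS library): each node periodically upserts its own row into the discovery_nodes table, reads the table back to find live peers, and connects to them.

    defmodule DiscoverySketch do
      @refresh_ms 30_000   # how often each node re-registers and rescans

      def loop(node_name, address) do
        register(node_name, address)        # upsert our row in discovery_nodes
        for peer <- live_peers(node_name) do
          Node.connect(peer)                # connect to every live peer
        end
        Process.sleep(@refresh_ms)
        loop(node_name, address)
      end

      # Placeholder persistence helpers; a real implementation would issue
      # SQL against the discovery_nodes table, expiring rows of nodes that
      # have been down for longer than the configured timeout.
      defp register(_node, _address), do: :ok
      defp live_peers(_self_node), do: []
    end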

    The first benefit visible to the user is that the nodes don’t need to be added to the cluster anymore. You don’t need commands like join_cluster or leave_cluster – actually you cannot use them anymore. Another immediate benefit is the lack of persistent volumes required by MongooseIM, which means that any node can be immediately replaced by another fresh instance. It is also no longer possible to have consistency errors, because there is no persistent schema and any (unlikely) conflicts are resolved automatically.

    Using CETS

    Let’s see how quickly the new MongooseIM with CETS can be set up. This simple example assumes that you have Docker and Kubernetes installed locally. These tools simplify the setup a lot, but if you cannot use them, you can also configure MongooseIM to use CETS manually – see the tutorial . In this example we will use PostgreSQL for all persistent storage in MongooseIM, including CETS node discovery. You only need to download the database schema file pg.sql to your current directory and start a PostgreSQL container with it.
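    If you don’t have the schema file yet, a download step along these lines should work – the URL assumes that the file still lives at priv/pg.sql in the MongooseIM repository for the 6.2.0 tag, so verify it before relying on it:

    $ curl -O https://raw.githubusercontent.com/esl/MongooseIM/6.2.0/priv/pg.sql

    With pg.sql in the current directory, start the database: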

    $ docker run -d --name mongooseim-postgres -e POSTGRES_PASSWORD=mongooseim_secret \
        -e POSTGRES_USER=mongooseim -v `pwd`/pg.sql:/docker-entrypoint-initdb.d/pgsql.sql:ro \
        -p 5432:5432 postgres

    The database should be up and running – let’s check it with psql:

    $ PGPASSWORD=mongooseim_secret psql -U mongooseim -h localhost
    (...)
    mongooseim=#
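    While you are connected, you can also confirm that the schema created the CETS discovery table – it stays empty until the first MongooseIM node registers itself, and the exact column layout comes from pg.sql, so it may differ between versions:

    mongooseim=# SELECT * FROM discovery_nodes;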

    Next, let’s install MongooseIM in Kubernetes with Helm. The volatileDatabase and persistentDatabase options are used to populate the generated MongooseIM configuration file with the required database settings. Since we set up the database with the default MongooseIM credentials, we don’t need to provide them here. If you want to use a different user name, password or other parameters, see the chart documentation for the complete list of options.

    $ helm repo add mongoose https://esl.github.io/MongooseHelm/
    $ helm install mim mongoose/mongooseim --set replicaCount=3 --set volatileDatabase=cets \
        --set persistentDatabase=rdbms
    NAME: mim
    LAST DEPLOYED: Tue Nov 28 08:56:16 2023
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Thank you for installing MongooseIM 6.2.0
    (...)
    

    Your three-node cluster using CETS and RDBMS should start up quickly. You can monitor its progress with kubectl:

    $ watch kubectl get sts,pod,svc
    
    NAME                          READY   AGE
    statefulset.apps/mongooseim   3/3     2m
    
    NAME               READY   STATUS    RESTARTS   AGE
    pod/mongooseim-0   1/1     Running   0          2m
    pod/mongooseim-1   1/1     Running   0          2m
    pod/mongooseim-2   1/1     Running   0          1m
    
    NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
    service/kubernetes      ClusterIP      10.96.0.1        <none>        443/TCP                    91d
    service/mongooseim      ClusterIP      None             <none>        4369/TCP,5222/TCP, (...)   2m
    service/mongooseim-lb   LoadBalancer   10.102.205.139   localhost     5222:32178/TCP, (...)      2m 

    Once the load balancer exposes the XMPP port 5222 on localhost, the whole service is ready to use. You can check the CETS cluster status on each node with the CLI (or the GraphQL API ). The following command checks the status on mongooseim-0 (the first node of the cluster):

    $ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl cets systemInfo
    {
      "data" : {
        "cets" : {
          "systemInfo" : {
            "unavailableNodes" : [],
            "remoteUnknownTables" : [],
            "remoteNodesWithoutDisco" : [],
            "remoteNodesWithUnknownTables" : [],
            "remoteNodesWithMissingTables" : [],
            "remoteMissingTables" : [],
            "joinedNodes" : [
              "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
            ],
            "discoveryWorks" : true,
            "discoveredNodes" : [
              "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
            ],
            "conflictTables" : [],
            "conflictNodes" : [],
            "availableNodes" : [
              "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
              "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
            ]
          }
        }
      }
    }

    You should see all nodes listed in joinedNodes, discoveredNodes and availableNodes, while the other lists should be empty. There is a tableInfo command as well, which shows information about each replicated table:

    $ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl cets tableInfo
    {
      "data" : {
        "cets" : {
          "tableInfo" : [
            {
              "tableName" : "cets_bosh",
              "size" : 0,
              "nodes" : [
                "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
              ],
              "memory" : 141
            },
            {
              "tableName" : "cets_cluster_id",
              "size" : 1,
              "nodes" : [
                "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
              ],
              "memory" : 156
            },
            {
              "tableName" : "cets_external_component",
              "size" : 0,
              "nodes" : [
                "mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
                "mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
              ],
              "memory" : 307
            },
            (...)
          ]
        }
      }
    }


    You can find more information about these commands in our GraphQL docs , because the CLI actually uses the GraphQL API under the hood.
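    For illustration, the systemInfo query from before could also be sent directly to the admin GraphQL endpoint. The port, path and credentials below are assumptions based on a default setup (and curl has to be available wherever you run it), so adjust them to your own configuration:

    $ curl -s --user admin:secret -H 'Content-Type: application/json' \
        -d '{"query": "{ cets { systemInfo { joinedNodes availableNodes } } }"}' \
        http://localhost:5551/api/graphql

    To complete our example, let’s create our first XMPP user account: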

    $ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl account registerUser \
      --username alice --domain localhost --password secret
    {
      "data" : {
        "account" : {
          "registerUser" : {
            "message" : "User alice@localhost successfully registered",
            "jid" : "alice@localhost"
          }
        }
      }
    }


    Now you can connect to the server with an XMPP client as alice@localhost – see https://trymongoose.im/client-apps or https://xmpp.org/software/?platform=all-platforms for client software.
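    If you first want a quick sanity check that the XMPP endpoint responds, openssl’s built-in XMPP STARTTLS support makes for a handy probe – the exact output depends on your TLS setup, and you can exit with Ctrl+C:

    $ openssl s_client -connect localhost:5222 -starttls xmpp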

    New extensions

    MongooseIM 6.2 satisfies the XMPP Compliance Suites 2023 , as reported at xmpp.org . Thanks to the new extensible architecture of mongoose_c2s, we are implementing new extensions faster than before. For example, we have recently added support for XEP-0386: Bind 2 and XEP-0388: Extensible SASL Profile , allowing the client to authenticate, bind its resource and enable extensions like message carbons , stream management and client state indication . All of this can be done in a single step, without the need for redundant roundtrips (see the example ). This way your clients can establish their sessions faster than before, putting less load on both the client and the server. We have also updated multiple extensions to their latest versions, and we will continue the effort to keep them up to date while adding new ones. Do you think we should support a new XMPP extension? Feel free to request a feature so we can put it on our roadmap – and if you really need it now, we can discuss possible sponsorship options.
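    To give a flavour of what this looks like on the wire, here is a simplified sketch of a SASL2 authentication request with an inline Bind 2 element, based on XEP-0388 and XEP-0386 – the mechanism and the base64 payload are placeholders, and the real exchange contains a few more elements, so see the linked example for the exact stanzas:

    <authenticate xmlns='urn:xmpp:sasl:2' mechanism='SCRAM-SHA-256'>
      <initial-response>BASE64-ENCODED-SCRAM-DATA</initial-response>
      <bind xmlns='urn:xmpp:bind:0'>
        <tag>mobile-client</tag>
      </bind>
    </authenticate>

    A single server response then confirms authentication and resource binding, replacing the multiple roundtrips of the classic SASL-then-bind flow.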

    Summary

    With the latest release 6.2 we have brought MongooseIM closer to you. Now you can try it out online, and you can easily install it in Kubernetes without worrying about persistent state and volumes. Your next step is to try our live demo, install MongooseIM with Helm and experiment with it. You can do it all for free and without any Erlang knowledge, so go ahead and use it as the foundation of your new messaging solution. You are not left alone either – should you have any questions, please feel free to contact us , and we will be happy to deploy, load-test, health-check, optimise and customise MongooseIM to fit your needs.

    The post MongooseIM 6.2: Easy to set up, use and manage appeared first on Erlang Solutions.