
4 posts tagged with "spawn"


· 9 min read
Adriano Santos

The Actor Model, introduced by Carl Hewitt in 1973, is a conceptual framework for dealing with concurrent computation. Unlike traditional models that rely on shared state and locks, the Actor Model provides a highly modular and scalable approach to designing systems, making it particularly well-suited for distributed and concurrent applications.

At its core, the Actor Model revolves around the concept of "actors"—independent entities that encapsulate state and behavior. Each actor can:

  1. Receive messages from other actors.
  2. Process these messages using its internal behavior.
  3. Send messages to other actors.
  4. Create new actors.

This model inherently avoids issues like race conditions and deadlocks by ensuring that each actor processes only one message at a time. Consequently, the state is modified in a thread-safe manner, as actors do not share state directly but communicate exclusively through message passing.
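
To make this concrete, here is a minimal sketch in Elixir (the language used for the examples later on this page) of an actor as a plain process: it owns its state, pulls one message at a time from its mailbox, and communicates only by sending messages. The module and message names are illustrative.

defmodule Counter do
  # Start an actor process that owns its state exclusively.
  def start(initial \\ 0), do: spawn(fn -> loop(initial) end)

  # The actor handles exactly one message at a time from its mailbox.
  defp loop(total) do
    receive do
      {:add, value, caller} ->
        new_total = total + value
        # Results are communicated by sending a message back, never by sharing memory.
        send(caller, {:total, new_total})
        loop(new_total)
    end
  end
end

# Usage: the caller interacts only through messages.
# counter = Counter.start()
# send(counter, {:add, 5, self()})
# receive do {:total, t} -> t end  #=> 5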

Key Benefits

  1. Concurrency and Scalability: Since actors operate independently, systems based on the Actor Model can easily scale across multiple processors or machines. This makes it ideal for cloud-native applications and services that demand high concurrency.

  2. Fault Tolerance: Actors can be designed to monitor and manage other actors, allowing for robust error handling and recovery strategies. This self-healing capability is a cornerstone of resilient system design.

  3. Modularity: Actors encapsulate state and behavior, promoting a modular structure that is easier to reason about, test, and maintain. This separation of concerns aligns well with microservices architecture, where each service can be seen as an actor or some group of actors.

Using the Actor Model to Build Business Applications

In the realm of business applications, the Actor Model can be employed to create robust, scalable, and maintainable systems. Here’s how it can be leveraged:

  1. Order Processing Systems: In e-commerce platforms, each order can be represented as an actor. The order actor can handle various stages of the order lifecycle, including validation, payment processing, inventory adjustment, and shipping. By encapsulating these processes within an actor, the system can handle a large number of orders concurrently without interference. A minimal sketch of such an order actor follows this list.

  2. Customer Service Applications: Each customer session can be modeled as an actor, managing individual interactions and maintaining state throughout the session. This is particularly useful in chatbots and automated customer service systems where maintaining context and state consistency is crucial.

  3. Financial Systems: In banking applications, transactions can be modeled as actors. This allows for secure, concurrent processing of transactions, ensuring that the system can handle multiple operations simultaneously without risking data corruption.

  4. Game Applications: In multiplayer online games, actors can represent various entities such as players, non-player characters (NPCs), items, and game rooms. Each actor encapsulates its state and behavior, interacting through messages to create a dynamic and responsive game world.
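
To ground the order-processing example above, here is a minimal, hypothetical sketch of an order actor written against the Spawn Elixir SDK that appears later on this page; the actor name, state fields, and actions are illustrative only, not a prescribed design.

defmodule OrderActor do
  use SpawnSdk.Actor,
    name: "order", # hypothetical actor name
    kind: :named,
    state_type: :json

  defmodule State do
    @derive {Jason.Encoder, only: [:status, :items]}
    defstruct status: "new", items: []
  end

  defact init(%Context{} = ctx), do: Value.noreply_state!(ctx.state || %State{})

  # Each stage of the order lifecycle is just another message this actor handles in turn.
  defact validate(%{items: items}, %Context{} = _ctx) do
    state = %State{status: "validated", items: items}

    Value.of()
    |> Value.state(state)
    |> Value.response(%{status: state.status})
  end

  defact ship(_payload, %Context{} = ctx) do
    state = %State{status: "shipped", items: ctx.state.items}

    Value.of()
    |> Value.state(state)
    |> Value.response(%{status: state.status})
  end
end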

Architectural Decisions Influenced by Actor Model Constraints

Implementing the Actor Model in business applications necessitates careful architectural planning due to its unique constraints and capabilities:

  1. State Management: Actors maintain their state internally and do not share state. This requires designing the system in such a way that all necessary information is passed through messages. This may involve breaking down complex processes into smaller, message-driven interactions.

  2. Message Passing: Since actors communicate solely through asynchronous messages, it’s essential to design a robust messaging infrastructure. This includes choosing appropriate messaging protocols, ensuring message delivery guarantees (e.g., at-least-once, at-most-once, or exactly-once delivery), and handling message serialization/deserialization.

  3. Error Handling and Supervision: Actors can supervise other actors, which means designing a supervision strategy is crucial. This involves defining how actors should respond to failures: whether they should restart, escalate the error, or stop entirely. Effective supervision trees help in building resilient applications. A minimal supervision sketch in Elixir follows this list.

  4. Distributed Coordination: In distributed systems, coordinating actors across multiple nodes can be challenging. Architectural decisions should account for network partitions, latency, and data consistency. Utilizing frameworks such as Spawn, Akka, Orleans, Dapr, or Kalix, or using languages like Erlang, Elixir, Pony, or Gleam, can simplify these aspects by providing built-in support for distributed actor systems.

  5. Scalability Considerations: Scaling an actor-based system requires monitoring actor workloads and ensuring that actors are efficiently distributed across available resources. Load balancing strategies and dynamic actor creation/destruction policies need to be defined to maintain optimal performance.
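
To illustrate the supervision point (item 3) above, here is a minimal, generic OTP supervision sketch in Elixir; the child module names and the restart strategy are placeholders, not a prescription.

defmodule Orders.Supervisor do
  use Supervisor

  def start_link(init_arg) do
    Supervisor.start_link(__MODULE__, init_arg, name: __MODULE__)
  end

  @impl true
  def init(_init_arg) do
    # Hypothetical worker processes supervised by this process.
    children = [
      {Orders.PaymentWorker, []},
      {Orders.ShippingWorker, []}
    ]

    # :one_for_one restarts only the failed child; other strategies
    # (:one_for_all, :rest_for_one) escalate failures differently.
    Supervisor.init(children, strategy: :one_for_one)
  end
end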

Granularity of Actors and Its Impact

One critical architectural decision in the Actor Model is determining the granularity of actors. Granularity refers to the size and number of actors within the system, affecting both performance and maintainability.

Fine-Grained vs. Coarse-Grained Actors

  1. Fine-Grained Actors: In this approach, actors are small, representing very specific tasks or entities. For example, each item in an inventory system might be an individual actor.

    • Pros: High concurrency, fine control over individual components, easier to scale specific parts of the system.
    • Cons: Higher overhead due to the increased number of actors, potential performance issues from frequent message passing, and more complex actor management (this can be mitigated by using a native actor runtime such as Spawn or another BEAM-based runtime).
  2. Coarse-Grained Actors: Here, actors represent larger aggregates, such as an entire inventory or order processing system.

    • Pros: Reduced overhead, fewer messages, simpler management.
    • Cons: Reduced concurrency, potential bottlenecks if a single actor becomes overwhelmed, more complex internal state management.

Managing Granularity

To effectively manage granularity, consider the following strategies:

  1. Profiling and Performance Testing: Continuously monitor the performance of your actor system. Identify bottlenecks and adjust the granularity accordingly. Profiling tools specific to actor systems, such as those available in Spawn, can help pinpoint performance issues.

  2. Dynamic Actor Creation: Use patterns like actor hierarchies and dynamic actor creation to balance the load. For instance, a coarse-grained actor can create fine-grained child actors to handle specific tasks when needed, thus balancing load dynamically. A rough sketch follows this list.

  3. State Partitioning: Partition state across multiple actors where applicable. In a fine-grained system, ensure that each actor holds only the necessary state for its specific responsibility, reducing the overhead of managing large states.

  4. Batch Processing: In some cases, batch processing within actors can reduce the message-passing overhead. Group related tasks and process them in batches within a single actor, which can be particularly useful in coarse-grained systems.
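
As a rough illustration of the dynamic actor creation strategy (item 2) above, here is a generic Elixir sketch using a DynamicSupervisor; the module names are hypothetical, and a native actor runtime such as Spawn would handle this kind of distribution for you.

defmodule Inventory.ItemSupervisor do
  use DynamicSupervisor

  def start_link(arg) do
    DynamicSupervisor.start_link(__MODULE__, arg, name: __MODULE__)
  end

  @impl true
  def init(_arg), do: DynamicSupervisor.init(strategy: :one_for_one)

  # A coarse-grained coordinator can spawn a fine-grained child actor on demand,
  # for example one worker per inventory item (SKU).
  def start_item(sku) do
    DynamicSupervisor.start_child(__MODULE__, {Inventory.ItemWorker, sku})
  end
end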

Modeling Actors as Business Entities

In business applications, actors can be modeled as business entities to better align with real-world processes and operations. Here’s how to effectively model actors as business entities like users, transactions, and more:

  1. User Actors: Each user can be represented as an actor, encapsulating the user's state, preferences, and actions. This actor can handle messages related to user authentication, profile updates, and interaction with other parts of the system.

    • Example: In a social media platform, a user actor might manage friend requests, message notifications, and content creation.
  2. Transaction Actors: Transactions in financial systems can be modeled as actors to ensure secure and isolated processing. Each transaction actor handles the entire lifecycle of a transaction, from initiation to completion, including rollback in case of failures.

    • Example: In a banking application, a transaction actor can manage the transfer of funds between accounts, ensuring consistency and integrity.
  3. Order Actors: For e-commerce platforms, orders can be represented as actors. Each order actor manages the stages of the order lifecycle, such as validation, payment processing, inventory update, and shipping.

    • Example: An order actor can interact with inventory actors to check stock availability and with payment actors to process transactions.
  4. Game Entities: In a game application, actors can represent various game entities such as players, NPCs, items, and game rooms. Each game entity actor manages its state and behavior, interacting through messages to create a dynamic and interactive game environment.

    • Example: In a multiplayer online game, player actors manage individual player states, NPC actors handle non-player character behavior, and item actors represent in-game objects like weapons and power-ups.

Actor Model and Architectural Patterns

Actors can be used to implement various architectural patterns, enhancing the robustness and scalability of business applications:

  1. Saga Pattern: The Saga pattern is used to manage long-running transactions by breaking them into a series of smaller, isolated transactions that can be individually committed or compensated. Actors are well-suited for implementing the Saga pattern due to their message-passing capabilities and independent state management.

    • Implementation: Each step of a saga can be an actor, with a coordinator actor managing the overall process. The coordinator sends messages to initiate each step and handles compensation actions in case of failures.
  2. Event Sourcing: Actors can be used to implement event sourcing, where changes to the state are stored as a sequence of events. Each actor can manage its event log, ensuring that the state can be reconstructed by replaying events.

    • Implementation: An order actor, for instance, can log events such as "OrderPlaced," "PaymentProcessed," and "OrderShipped." Replaying these events can reconstruct the current state of the order.
  3. CQRS (Command Query Responsibility Segregation): In a CQRS architecture, actors can handle commands (state-changing operations) and queries (read operations) separately, improving scalability and performance.

    • Implementation: Command actors handle operations that modify the state, while query actors provide read-only views of the state. This separation can enhance performance by allowing the system to scale read and write operations independently.
  4. State Machines: Actors can represent state machines, where each state of the actor corresponds to a different behavior. This is particularly useful in scenarios where an entity has a well-defined lifecycle with distinct states.

    • Implementation: Consider a user registration process. The user actor can have states like "New," "Email Sent," "Verified," and "Active." Each state defines how the actor responds to messages, enabling clear and maintainable transitions. A minimal sketch of such a state-machine actor follows below.
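
To make the state machine idea concrete, here is a hypothetical Elixir sketch using the Spawn SDK shown later on this page; the actor name, state values, and actions are illustrative.

defmodule UserRegistrationActor do
  use SpawnSdk.Actor,
    name: "user_registration", # hypothetical actor name
    kind: :named,
    state_type: :json

  defmodule State do
    @derive {Jason.Encoder, only: [:status]}
    defstruct status: "new"
  end

  defact init(%Context{} = ctx), do: Value.noreply_state!(ctx.state || %State{})

  # Each action checks the current state before allowing a transition.
  defact verify(_payload, %Context{} = ctx) do
    case ctx.state.status do
      "email_sent" ->
        Value.of()
        |> Value.state(%State{status: "verified"})
        |> Value.response(%{status: "verified"})

      other ->
        # Reject transitions that are not valid from the current state.
        Value.of()
        |> Value.state(ctx.state)
        |> Value.response(%{error: "cannot verify from #{other}"})
    end
  end
end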

Conclusion

The Actor Model offers a robust paradigm for building scalable, resilient, and maintainable software systems. By embracing message passing and encapsulating state within actors, developers can address the complexities of concurrency and distribution effectively. In business applications, this model can streamline processes such as order handling, customer interactions, financial transactions, and game interactions. The granularity of actors significantly impacts system performance and maintainability, requiring careful management through profiling, dynamic actor creation, state partitioning, and batch processing. Actors can also be modeled as business entities, aligning system design with real-world processes. By implementing architectural patterns like Sagas, Event Sourcing, CQRS, and State Machines, the Actor Model can enhance the robustness and scalability of a wide range of applications. Finally, Spawn facilitates several of the use cases presented here, since it was designed from the ground up to simplify building business applications.

In future posts we will talk more about how Spawn can be essential in developing real-world applications. These are exciting times for serverless actor computing. Happy coding! 🚀✨

· 3 min read
Adriano Santos

In recent times, the eigr community had the privilege of exploring a new addition to the Erlang/BEAM landscape, inspired by the Serverless model: FLAME. The excitement surrounding this release was tangible, especially given FLAME's connection to the renowned Phoenix framework. While Spawn, our software in the same Serverless sphere, might appear as a direct competitor, we believe there is more synergy than competition between them.

It may seem peculiar that we express satisfaction with FLAME, considering that Spawn also operates in the Serverless space. However, after more than four years of dedicated work and pioneering discussions on topics like Stateful Serverless and durable computing, we recognize that attention is now turning to concepts we championed from the outset. Additionally, FLAME is built on the Erlang virtual machine, making it a distant relative and, in a sense, an ally in our journey.

We are genuinely excited about FLAME, actively contributing to its ecosystem and collaboratively engaging with its community of developers and users. Leveraging our expertise in Kubernetes, we are even developing a backend for FLAME in this area, inviting everyone to join us in this joint endeavor.

Spawn vs. FLAME: Exploring Differences and Complementarities

Despite some superficial similarities, Spawn and FLAME are fundamentally distinct, both inspired by the Serverless model but with complementary approaches. While FLAME follows the Function as a Service (FaaS) paradigm, literally sending functions over the wire and adopting a stateless premise, Spawn is more oriented toward applications that require state management.

In Spawn, we not only embrace the Serverless model but also introduce a new layer based on actors, business contracts, and a fully polyglot programming model. This approach offers a fresh perspective on software design, encouraging developers to think in terms of the business rather than the plumbing needed to manage state or talk to external services. While FLAME moves functions across the network, in Spawn we shift data to the computation, fundamentally altering how applications are built and maintained.

In summary, although they may seem like competitors at first glance, Spawn and FLAME coexist harmoniously, addressing different needs and providing valuable approaches in the vast universe of Serverless computing in the Erlang/BEAM space. Our goal is not only to highlight the differences but also to promote collaboration between both communities, collectively building a more innovative and robust future for Serverless software development.

In future posts we will discuss more about how Spawn and FLAME can be applied in a complementary way. These are exciting times for serverless computing. Happy coding! 🚀✨

· 10 min read
Elias Dal Ben Arruda
Adriano Santos

Hello Elixir enthusiasts! 🚀 As the tech landscape evolves, so should our tools and approaches to development. Today, I'm excited to introduce you to a significant advancement in Elixir development that can reshape how we build distributed systems – I present to you Spawn.

The Elixir Actors Dilemma

We've all been there – struck by that stroke of genius while working with Elixir. The allure of in-memory state storage, backed by the reliability of Erlang/OTP, seems like a dream come true. But reality hits hard, especially in the realm of production environments where Kubernetes often plays a pivotal role. Managing multiple servers autonomously, especially in distributed systems, can quickly turn our dream into a complex nightmare.

Enter Spawn: A New Approach to Actors

Spawn is not just another framework; it's a paradigm shift in how we implement code. Imagine a world where you can worry less about the underlying infrastructure and instead focus on crafting the domain-specific logic that truly matters. That's precisely what Spawn brings to the table.

Let's delve into a quick comparison between a traditional GenServer approach and the innovative Spawn Actor using the Spawn Elixir SDK.

Consider a GenServer that does the following:

defmodule Incrementor do
  use GenServer

  defmodule State do
    defstruct [total: 0]
  end

  def init(_), do: {:ok, %State{total: 0}}

  def handle_call({:add, value}, _from, state) do
    new_total = state.total + value

    {:reply, {:ok, %{total: new_total}}, %State{state | total: new_total}}
  end
end

If we execute it, as you probably already know, we can add a specified value to the total stored in memory, and calling add will always return the updated total stored in memory.

iex(1)> {:ok, pid} = GenServer.start_link(Incrementor, [])
iex(2)> GenServer.call(pid, {:add, 1})
{:ok, %{total: 1}}

iex(3)> GenServer.call(pid, {:add, 1})
{:ok, %{total: 2}}

With Spawn, the same definition would look like this.

The process we defined with a GenServer is what we call an Actor:

defmodule IncrementorActor do
  use SpawnSdk.Actor,
    name: "incrementor",
    kind: :named,
    state_type: :json,
    deactivate_timeout: 30_000,
    snapshot_timeout: 10_000

  defmodule State do
    @derive {Jason.Encoder, only: [:total]}
    defstruct [total: 0]
  end

  defact init(%Context{} = ctx), do: Value.noreply_state!(ctx.state || %State{})

  defact add(%{value: value}, %Context{} = ctx) do
    new_total = ctx.state.total + value

    Value.of()
    |> Value.state(%State{total: new_total})
    |> Value.response(%{total: new_total})
  end
end

Our application would look like:

defmodule Example.Application do
  @moduledoc false
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      {
        SpawnSdk.System.Supervisor,
        system: "spawn-system",
        actors: [
          IncrementorActor
        ]
      }
    ]

    opts = [strategy: :one_for_one, name: Example.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

And the SDK can be installed in your Elixir project with:

[
  {:spawn_sdk, "~> 1.1"},
  # if using stateful actors
  # {:spawn_statestores_mysql, "~> 1.1"}
  # {:spawn_statestores_postgres, "~> 1.1"}
  # ... others
]

When using a statestore, you need to define a statestore key in config.exs or via the SPAWN_STATESTORE_KEY environment variable to make sure your actor state is properly encrypted.

NOTE: It is recommended to securely store the key in the environment where it is being used.

config :spawn_statestores, statestore_key: "secure_database_key"

With that defined, the equivalent of calling or casting a GenServer process is done with invoke.

We pass the message we want in the payload attribute; it needs to be a struct or map that can be encoded to JSON, or a protobuf struct.

iex(1)> SpawnSdk.invoke("incrementor", system: "spawn-system", action: "add", payload: %{value: 1})
{:ok, %{total: 1}}

NOTE: We recommend using protobufs for both the payload and the state definition (with state_type: Protos.YourStateType); however, for the sake of simplicity, this example uses JSON.
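
For illustration, a protobuf-backed variant of the same actor might look roughly like the sketch below. The Protos.Incrementor.State module is a hypothetical message written with the Elixir protobuf library (it would normally be generated from a .proto file), and only the relevant parts of the actor are shown.

# Hypothetical protobuf message used as the actor's state and response.
defmodule Protos.Incrementor.State do
  use Protobuf, syntax: :proto3

  field :total, 1, type: :int32
end

defmodule IncrementorActor do
  use SpawnSdk.Actor,
    name: "incrementor",
    kind: :named,
    state_type: Protos.Incrementor.State

  defact add(%{value: value}, %Context{} = ctx) do
    new_total = (ctx.state || %Protos.Incrementor.State{}).total + value
    new_state = %Protos.Incrementor.State{total: new_total}

    Value.of()
    |> Value.state(new_state)
    |> Value.response(new_state)
  end
end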

Unpacking the Magic: Answers to Your Questions

  1. What exactly is "spawn-system"?

Spawn operates as a platform that manages infrastructure for you. "spawn-system" is a configuration entity encapsulating multiple actors within a meaningful context.

  2. Why is it an SDK?

Spawn embraces a multi-language approach. SDKs allow different actors, even in different languages, to register with Spawn. For instance, you could have Elixir and NodeJS actors running the same logic seamlessly.

  3. How do I run it?

In development, you can use Spawn as a lib. For production, use Kubernetes CRDs provided by Spawn for easy and scalable deployment.

  4. How do we handle state persistence?

Spawn intelligently handles state persistence through well-defined timeouts and snapshot mechanisms, ensuring reliability even during rollouts.

  5. Why not just use a GenServer?

Managing a distributed system with GenServer requires tackling numerous challenges. Spawn abstracts away these complexities, allowing you to focus on your domain logic without getting lost in infrastructure intricacies.

Two items above deserve a little more comment:

Actor System

Spawn is a platform that also handles infrastructure for you. We run on top of Kubernetes and have defined Kubernetes CRDs that help you configure the clustering and lifecycle of your actors.

A system is an entity that encapsulates multiple Actors in a context that makes sense for you.

We can configure one using the predefined Spawn CRDs; this is also where we configure which persistent database will hold our Actors' state.

SDKs

We can have multiple actors in the same system, with different SDKs registering those actors. We call each deployment that uses an SDK an "ActorHost."

In our example, we could have two SDKs, Elixir and NodeJS, running the same actor or different ones. Spawn will then forward any invocations for one of those registered SDKs, with the specified implementation for each action. You can even invoke an actor registered in a different system or in the same system from another SDK.

For instance, if we wanted to invoke the same actor we wrote, but from a NodeJS ActorHost, it would look like this:

import spawn from '@eigr/spawn-sdk'

const response = await spawn.invoke('incrementor', {
  action: 'add',
  system: 'spawn-system',
  payload: {value: 5}
})

console.log(response)
// { total: 6 }

This way, we can interact with each actor globally, across different Systems and ActorHosts, while still maintaining the same state handling mechanism. And the best part? We can achieve all of this without the need for transactions, locks, or any additional infrastructure to enforce sequential state writes.

That sounds magical, how do I run it?

In development mode for Elixir, you can take advantage of using Spawn as a lib; you'll be able to use all the features you want in a single runtime.

However, for production we recommend using the CRDs we provide.

First of all, you need to install our k8s CRDs with the following manifest (using kubectl):

kubectl create ns eigr-functions && curl -L https://github.com/eigr/spawn/releases/download/v1.1.1/manifest.yaml | kubectl apply -f -

NOTE: You need to inform the desired release version. Check our github to see the latest one released.

After installing it successfully, you now need to configure your System:

apiVersion: spawn-eigr.io/v1
kind: ActorSystem
metadata:
  name: spawn-system # Mandatory. Name of the ActorSystem
  namespace: default # Optional. Default namespace is "default"
spec:
  statestore:
    type: MySql # Valid are [Postgres, MySql, MariaDB, Sqlite, MSSQL, CockroachDB]
    credentialsSecretRef: mysql-connection-secret # The secret containing connection params created in the previous step.
    pool: # Optional
      size: "10"

You can generate the credentialsSecret or reuse whatever secret you already use to store your database credentials. An example follows; note that the secret needs to be created in the eigr-functions namespace.

kubectl create secret generic mysql-connection-secret -n eigr-functions \
--from-literal=database=eigr-functions-db \
--from-literal=host='mysql' \
--from-literal=port='3306' \
--from-literal=username='admin' \
--from-literal=password='admin' \
--from-literal=encryptionKey=$(openssl rand -base64 32)

After installing the system, you will need to register your ActorHost, which can look like this:

apiVersion: spawn-eigr.io/v1
kind: ActorHost
metadata:
  name: elixir-example
  namespace: default
  annotations:
    spawn-eigr.io/actor-system: spawn-system
spec:
  host:
    image: org/your-host-image:0.0.1
    embedded: true # this is only for Elixir SDKs, you can ignore this if you will use another language
    ports:
      - name: "http"
        containerPort: 8800
  autoscaler:
    min: 1
    max: 2

Just by applying this configuration, and having a container image with the application from the example we wrote at the start of the article, we should see the instances start up and handle the clustering and all the heavy infrastructure work for you.

Managing State Resilience with Spawn

In the realm of Spawn, we prioritize the resilience of your application's state. Each actor in Spawn comes with configurable parameters, namely the snapshot_timeout and deactivate_timeout. Let's delve into these settings:

  • deactivate_timeout determines the duration (in milliseconds) your actor remains actively in memory, even when not in use.

  • snapshot_timeout determines how frequently snapshots of your actor's state are saved to your persistent storage.

The magic unfolds after an actor is deactivated, triggered either by the specified timeout or during a Kubernetes rollout. In this scenario, Spawn meticulously manages the lifecycle of each process, ensuring that the state is gracefully saved in the configured persistent storage.

Here's the key assurance: even in the face of failures, crashes, or net-splits, Spawn guarantees that the state of your application will always revert to the last valid state. This means if an instance fails, another node seamlessly takes over from where it left off, ensuring the continuity and integrity of your application's data. Our meticulous tuning of Custom Resource Definitions (CRDs) and signal handling ensures that you won't lose data during rollouts or network partitions.

With Spawn, you can confidently embrace a resilient state management model, where the reliability and consistency of your application's data are at the forefront of our design philosophy.

Unleashing Gains in Agility and Innovation with Spawn

Beyond the facade of extensive configurations lies a treasure trove of advantages awaiting exploration. Spawn not only simplifies but significantly enriches your development experience. Imagine bidding farewell to the complexities of defining Kubernetes resources, the intricacies of rollouts, the considerations of HPA, and the worries of scalability, network configurations, and system integrity assessments.

Spawn emerges as the driving force behind a newfound sense of agility and innovation. It liberates you from the burdensome aspects of infrastructure management, allowing you to redirect your focus towards what truly matters – crafting innovative solutions. Step into a future where complexities dissolve, and your journey into agile and innovative Elixir development begins with a resounding hello!

If you instead choose to go down the do-it-yourself path, you will need to face at least the following challenges:

  • Ensuring proper handling of connections between multiple nodes in your Erlang cluster.
  • Ensuring reliable and synchronized data rollouts to avoid message or state loss during instances rolling out.
  • Implementing effective persistence mechanisms to recover data in the event of netsplit scenarios, preventing data loss.
  • Managing the process lifecycle to ensure predictable recovery and maintain a consistent state in case of errors.
  • Designing a well-defined API that integrates your processes seamlessly with other systems, ensuring message synchronization.
  • Establishing a reliable distribution mechanism for sending messages to actors within your own edge, with the ability to synchronize them later.
  • Mitigating process queue bottlenecks to optimize performance and prevent delays.
  • Ensuring atomicity in a distributed system, maintaining data consistency and integrity.
  • Ensuring that you can concentrate on your domain-specific code without being burdened by unnecessary complexities and boilerplate.
  • Implementing seamless integration patterns, including process pipelines, customized activators, workflows, and effective management of side effects, among others.
  • Developing and managing infrastructure code related to brokers, caching, and other components.

Conclusion

This is more than just a practical example; it's an invitation to explore the full potential of Spawn. For a deeper dive into the concepts and foundations, refer to our Spawn Full Documentation and our insightful article Beyond Monoliths and Microservices.

Ready to elevate your Elixir development experience? Embrace the future with Spawn. Happy coding! 🚀✨

· 9 min read
Adriano Santos
Marcel Lanz

Recently, an article published by the Amazon Prime Video team reignited the Monoliths versus Microservices discussion. It is remarkable, and often regrettable, how many fervent feelings there are around this already old discussion. I've always wondered what the real point was between these two conflicting worldviews, and I could never understand why so much energy is spent on this type of discussion.

In this article we intend to show why it's time to move forward and how our Spawn technology can help developers let go of these age-old issues, which do not add much value to the business.

Monoliths


Monolithic architecture describes a single, undivided system that runs in a single process: a software application in which different components are combined into a single program on a single platform. Because the entire system is in a single block, its initial development can be more agile, making it possible to develop an application in less time and with less initial complexity (notice the word initial).

As a monolithic application evolves, more classes, modules, methods, or functions are added to the same codebase and process. Another point is that monolithic applications tend not to scale horizontally well: since all of a system's functionality is tied to a single process, this type of application is expected to scale better vertically than horizontally. This characteristic also makes it difficult to run this type of system in environments such as Kubernetes or other kinds of cloud environments (notice that I said difficult, not unfeasible).

And this is where the problems with this type of architecture usually start to appear. In other words, the issue with monoliths seems to be directly related to the scale of the system in question. The more complexity you add, the harder it gets to maintain, which requires more complexity to mitigate the problem, which makes it harder to maintain... well, you get the point. To get around these effects, there are many software engineering patterns that help mitigate such failures (facades, ports, adapters, programming to interfaces, and so on); for its defenders, all this additional complexity turns out to be a fair price to pay, up to a point.

Microservices


What differentiates the microservices architecture from more traditional monolithic approaches is how it decomposes the application into smaller units. Each unit, in turn, is called a service and can be created and deployed independently. In other words, each individual service can work or fail without compromising the others. This helps you embrace the technology side of DevOps and makes continuous integration and continuous delivery (CI/CD) more streamlined and feasible (at least in theory). In terms of scaling, microservices tend to scale better horizontally than vertically, which in turn suits infrastructures powered by Kubernetes, serverless platforms, or other kinds of cloud environments.

But not everything is rosy with microservices. Microservices generally increase the complexity of managing failures and of controlling infrastructure costs, and developers had to become experts in distributed, large-scale systems. To deal with it all, several techniques have emerged over the years to avoid the various problems of distributed systems; together with the advent of observability and FinOps practices that helped control infrastructure costs, they allowed the microservices architecture to become extremely popular.

Spawn and Beyond

I could write dozens of pros and cons for each of these architectures and, at least for us, never come up with an outright winner. The undeniable fact is that both have great strengths and equally great flaws. We are left without a winner, and therefore we need to let go and move further.

In simple terms, what we in the Eigr community believe is that developers, in general, need a platform that is, among other things, simpler, dynamic, free of vendor lock-in, focused on the business domain rather than on technicalities, and that adapts well to the granularity your business requirements demand.

Now let's introduce our Spawn technology and try to explore a little bit of how it can help us go further.

Spawn is primarily based on three very powerful abstractions: the Sidecar pattern, the Actor Model, and Serverless. The first allows you to deploy an application's components in a separate process or container, providing isolation and encapsulation. This pattern enables applications to be composed of heterogeneous technologies and components, while allowing this separate process to handle the infrastructure layers without affecting the evolution of your business code. The second is a fascinating and relatively simple alternative for developing distributed and concurrent systems. This model allows you to decompose your system into small units, called actors, that communicate only by passing messages. Actors encapsulate state and behavior in a single unit and are lock-free; that is, when programming with actors you are free of semaphores, mutexes, and any kind of synchronization code. Finally, Serverless lets developers focus on their code without worrying about the infrastructure. Using Kubernetes as an orchestrator for our serverless workloads, we can provide a self-managed platform without forcing the developer to be tied to any existing public cloud offering. Free from lock-in!

Spawn, however, goes beyond the basics of the Actor Model by exposing several software patterns in a simplified way for the developer. Spawn is also oriented to your business domain, allowing you to focus directly on the business problem while the Spawn runtime handles things like state management and persistence, caching, inter-process calls, scalability, cluster management, scaling up and down, and integration with external middleware, among many other non-functional requirements that software usually needs to achieve its final goals.

Spawn is also a polyglot platform, allowing you to write Actors in different languages and letting them communicate with each other in a completely transparent way, without the need to define REST or RPC interfaces between your components. Being based on the powerful Erlang technology, you get the best of what the battle-tested Erlang virtual machine can provide without giving up your natural domain language, be it Java, TypeScript, Elixir, or another.

Now that we have a basic idea about Spawn we can move on to how it can help us move beyond the discussion of Monoliths and Microservices.

Services, Applications, and Granularity

To understand how Spawn can help us move the discussion forward, we first need to understand how Spawn organizes its deployable components. The most basic unit of Spawn is the Actor; it is through Actors that developers express their domain problems, and, as we said before, an Actor is responsible for encapsulating state and its associated behavior.


That said, in simplified terms a Spawn application is nothing more than a set of Actors organized into a deployment unit that we could call a service or application, but which in our terminology we call an ActorHost. An ActorHost is the deployable unit made up of the host container (where the developer's code runs) plus the proxy container, which is where the actors actually perform their tasks.


You can have hundreds or even thousands of actors running on a single ActorHost, which in turn can have multiple replicas running on a cluster. As Spawn Actors are activated only when there is work to be done and are deactivated after a period of inactivity, the workloads are distributed among different replicas of the same ActorHost. From the developer's point of view, this doesn't matter, because the only thing needed to send a message to an actor is its name. No complicated APIs, and no additional worries like service discovery, circuit breakers, or anything else.
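
For example, invoking an actor from the Elixir SDK only requires its name, system, and the action to call (the actor name and action here are illustrative):

# Only the actor's name and system are needed; no service discovery, no circuit breakers.
SpawnSdk.invoke("order", system: "spawn-system", action: "ship", payload: %{})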


And finally, your ActorHosts are grouped within a more general system that we call an ActorSystem. All ActorHost applications within the same actor system actively communicate with each other, forming a real cluster of nodes. Think of an ActorSystem as a distributed container. An actor within one actor system can still communicate with an actor in another actor system transparently for the developer; the difference is that this communication crosses network boundaries. That is, an ActorSystem provides network isolation for a set of ActorHosts, allowing even very large systems to maintain a high level of isolation while enabling better resource management and avoiding non-essential communication overhead.


As seen above, with Spawn you can work at different grain sizes to achieve your goals without having to think too hard about how to connect the parts. You can group all your actors within a single application or break them into smaller parts, but the way these components interact will not change. This type of architecture has become an industry trend, and we are proud to say that we thought about all this a long time ago and have been pushing the initiative forward with our Spawn technology.

Conclusion

This whole discussion around Microservices vs. Monoliths is a lot of fun and an excellent mental exercise, but we as software engineers must remember that, at the end of the day, it is the value we deliver to the business that makes the difference. Spawn, with its polyglot serverless experience built on open cloud standards, can help you take big steps in that direction, without wasting precious hours on heated discussions that are far from the real goal: solving your business problems!

In this post we showed that our Spawn technology, based on important industry standards and focused on bringing agility to developers' day-to-day work, is a valuable tool for achieving your business goals without giving up the scalability and resiliency that today's world demands. We could talk a lot more about Spawn (Activators, Workflows, exposing APIs declaratively, and much more), but that will be for other posts; today we focused on how Spawn can help you move past the discussion about Monoliths and Microservices. See you again soon!