
Instance Types & Deployment Modes

Apache Fineract uses a mode system to control what each running instance is responsible for. A single process can be a read node, a write node, a batch processor, or any combination. Understanding these modes is essential for designing a scaled deployment or understanding what Finecko provisions on your behalf.


The Four Mode Flags

Each Fineract instance independently enables or disables four responsibilities:

| Mode | Environment Variable | Default | Responsibility |
| --- | --- | --- | --- |
| Read | FINERACT_MODE_READ_ENABLED | true | Serves GET API requests |
| Write | FINERACT_MODE_WRITE_ENABLED | true | Serves POST/PUT/DELETE requests; runs Liquibase migrations on startup |
| Batch Manager | FINERACT_MODE_BATCH_MANAGER_ENABLED | true | Schedules and partitions batch jobs (COB) |
| Batch Worker | FINERACT_MODE_BATCH_WORKER_ENABLED | true | Executes batch job partitions |

By default, all four are true, which means a fresh single-node deployment handles everything. As load grows, these responsibilities can be split across dedicated nodes.
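As an illustration only (this is not a Fineract script), a node's effective role can be read straight off the four flags:

```shell
# Illustration -- not part of Fineract. Derives a node's role
# description from the four mode flags (each "true" or "false").
describe_node() {
  # args: read write batch-manager batch-worker
  role=""
  if [ "$1" = "true" ]; then role="$role read"; fi
  if [ "$2" = "true" ]; then role="$role write"; fi
  if [ "$3" = "true" ]; then role="$role batch-manager"; fi
  if [ "$4" = "true" ]; then role="$role batch-worker"; fi
  echo ${role:-none}
}

describe_node true true true true     # default node: read write batch-manager batch-worker
describe_node false false false true  # dedicated worker: batch-worker
```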


What Each Mode Does

Read Mode

A Read node accepts GET requests only. It connects to the database in a read-only capacity and serves API queries: fetching loan details, client profiles, account balances, reports, and any other data-retrieval operation.

Read nodes can be pointed at a read replica database to offload query traffic from the primary. Because they accept no writes, they are safe to scale horizontally without coordination - a load balancer can distribute GET requests across any number of Read nodes.

When FINERACT_MODE_READ_ENABLED=false, the node will reject GET requests with a 405 response.
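The GET-versus-everything-else split described above is typically enforced at the load balancer. A minimal sketch of that routing rule (the pool names are hypothetical, not Fineract configuration):

```shell
# Sketch of the load-balancer rule: GET requests can go to any Read
# node, everything else must reach a Write node. Pool names are
# hypothetical.
route_pool() {
  case "$1" in
    GET) echo "read-pool" ;;
    *)   echo "write-pool" ;;
  esac
}

route_pool GET   # read-pool: served by horizontally scaled Read nodes
route_pool POST  # write-pool: must reach the primary-connected Write nodes
```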

Write Mode

A Write node accepts POST, PUT, and DELETE requests. It is responsible for:

  • All state-changing API operations (creating clients, disbursing loans, recording repayments, etc.)
  • Running Liquibase database migrations on startup - this is why Write nodes should be upgraded first in a rolling deployment
  • Publishing business events to the message broker after each transaction

Write nodes must connect to the primary (writable) database instance. Running multiple Write nodes is supported - they share the same primary database and coordinate through it.

When FINERACT_MODE_WRITE_ENABLED=false, the node will reject write operations and will also skip Liquibase migrations on startup (useful for worker nodes that should not be running schema migrations).

Batch Manager Mode

The Batch Manager is responsible for:

  • Scheduling batch jobs (primarily the nightly Loan COB job)
  • Partitioning the loan portfolio into chunks
  • Distributing work partitions to Batch Worker nodes via the remote job messaging system
  • Waiting for all partitions to complete and finalising the run

Only one active Batch Manager should run at a time in a deployment. Running two managers simultaneously will cause duplicate partitioning and data integrity issues.
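The single-manager constraint can be checked against a planned layout before rollout. A sketch (the node names and flags below are hypothetical):

```shell
# Sketch: validate a planned layout -- exactly one node may enable the
# batch manager. "name:manager-flag" pairs below are hypothetical.
layout="api-1:false api-2:false batch-1:true worker-1:false"
managers=0
for spec in $layout; do
  if [ "${spec#*:}" = "true" ]; then managers=$((managers + 1)); fi
done
if [ "$managers" -eq 1 ]; then echo "layout OK: exactly one batch manager"; fi
```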

Batch Worker Mode

A Batch Worker receives loan portfolio partitions from the Batch Manager and executes the COB steps: arrears calculations, overdue charges, accrual postings, and delinquency updates.

Worker nodes are the main scaling lever for COB performance. A larger portfolio benefits from more workers running in parallel. Each worker node must have a unique FINERACT_NODE_ID.

Workers do not need Write or Read mode enabled - they can run as batch-only nodes with FINERACT_MODE_READ_ENABLED=false and FINERACT_MODE_WRITE_ENABLED=false, and optionally FINERACT_LIQUIBASE_ENABLED=false to skip the migration check on startup.
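A duplicate FINERACT_NODE_ID is easy to catch before rollout. As a sketch (not a Fineract feature; the id list is hypothetical):

```shell
# Sketch: detect duplicate FINERACT_NODE_ID values in a planned worker
# fleet. The id list below is hypothetical and contains a deliberate
# duplicate.
node_ids="2 3 4 4"
dups=$(printf '%s\n' $node_ids | sort | uniq -d)
if [ -n "$dups" ]; then
  echo "duplicate FINERACT_NODE_ID values: $dups"
fi
```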


Common Deployment Configurations

Single Node (Development / Small Production)

All modes enabled on one process. Simplest setup, suitable for smaller portfolios.

FINERACT_MODE_READ_ENABLED=true
FINERACT_MODE_WRITE_ENABLED=true
FINERACT_MODE_BATCH_MANAGER_ENABLED=true
FINERACT_MODE_BATCH_WORKER_ENABLED=true
FINERACT_NODE_ID=1
FINERACT_REMOTE_JOB_MESSAGE_HANDLER_SPRING_EVENTS_ENABLED=true

Single-node only

FINERACT_REMOTE_JOB_MESSAGE_HANDLER_SPRING_EVENTS_ENABLED=true is only valid when the batch manager and worker run in the same JVM. For multi-node deployments, use Kafka or JMS instead.

Separate API and Batch Node

API traffic and batch processing are isolated. The API node handles all read and write requests while the batch node runs COB. Both nodes connect to the same primary database.

# API node
FINERACT_NODE_ID=1
FINERACT_MODE_READ_ENABLED=true
FINERACT_MODE_WRITE_ENABLED=true
FINERACT_MODE_BATCH_MANAGER_ENABLED=false
FINERACT_MODE_BATCH_WORKER_ENABLED=false

# Batch node (manager + worker on same process)
FINERACT_NODE_ID=2
FINERACT_MODE_READ_ENABLED=false
FINERACT_MODE_WRITE_ENABLED=false
FINERACT_MODE_BATCH_MANAGER_ENABLED=true
FINERACT_MODE_BATCH_WORKER_ENABLED=true
FINERACT_LIQUIBASE_ENABLED=false

This configuration ensures COB processing does not compete with API traffic for CPU and database connections.

Scaled: Read Replicas + Dedicated Batch Nodes

# Write API node (1 or more)
FINERACT_MODE_READ_ENABLED=false
FINERACT_MODE_WRITE_ENABLED=true
FINERACT_MODE_BATCH_MANAGER_ENABLED=false
FINERACT_MODE_BATCH_WORKER_ENABLED=false

# Read API nodes (behind load balancer, pointing at read replica DB)
FINERACT_MODE_READ_ENABLED=true
FINERACT_MODE_WRITE_ENABLED=false
FINERACT_MODE_BATCH_MANAGER_ENABLED=false
FINERACT_MODE_BATCH_WORKER_ENABLED=false

# Batch Manager (exactly one)
FINERACT_MODE_READ_ENABLED=false
FINERACT_MODE_WRITE_ENABLED=false
FINERACT_MODE_BATCH_MANAGER_ENABLED=true
FINERACT_MODE_BATCH_WORKER_ENABLED=false

# Batch Workers (one or more, unique FINERACT_NODE_ID per node)
FINERACT_MODE_READ_ENABLED=false
FINERACT_MODE_WRITE_ENABLED=false
FINERACT_MODE_BATCH_MANAGER_ENABLED=false
FINERACT_MODE_BATCH_WORKER_ENABLED=true
FINERACT_LIQUIBASE_ENABLED=false

In this configuration, Kafka or JMS must be used for remote job messaging (FINERACT_REMOTE_JOB_MESSAGE_HANDLER_KAFKA_ENABLED=true or FINERACT_REMOTE_JOB_MESSAGE_HANDLER_JMS_ENABLED=true).


Remote Job Messaging

The Batch Manager and Batch Worker communicate via a messaging system. Exactly one of these must be enabled:

| Option | Variable | Use when |
| --- | --- | --- |
| Spring Events | FINERACT_REMOTE_JOB_MESSAGE_HANDLER_SPRING_EVENTS_ENABLED | Single-node only (manager and worker in same JVM) |
| Kafka | FINERACT_REMOTE_JOB_MESSAGE_HANDLER_KAFKA_ENABLED | Multi-node deployments |
| JMS (ActiveMQ) | FINERACT_REMOTE_JOB_MESSAGE_HANDLER_JMS_ENABLED | Multi-node deployments |

Exactly one must be true

If more than one remote job messaging option is enabled, or none are enabled, the batch system will malfunction. COB partitions will either not be dispatched or will be dispatched to multiple handlers simultaneously.
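The "exactly one" constraint can be verified mechanically before startup. A sketch (check_handlers is not a Fineract utility, just an illustration of the rule):

```shell
# Sketch (not a Fineract utility): succeeds only when exactly one
# remote job message handler flag is set to "true".
check_handlers() {
  enabled=0
  for h in SPRING_EVENTS KAFKA JMS; do
    eval "val=\${FINERACT_REMOTE_JOB_MESSAGE_HANDLER_${h}_ENABLED:-false}"
    if [ "$val" = "true" ]; then enabled=$((enabled + 1)); fi
  done
  [ "$enabled" -eq 1 ]
}

FINERACT_REMOTE_JOB_MESSAGE_HANDLER_KAFKA_ENABLED=true
if check_handlers; then echo "remote job messaging: OK"; fi
```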


How Finecko Manages This

On a Finecko managed deployment, instance mode configuration is handled for you:

  • Small/Standard tiers run a single combined node - simple, low-overhead, fully functional
  • Scale tiers provision separate API and Batch node pools, with COB workers scaled independently from API traffic
  • Kafka is used for remote job messaging on all multi-node deployments
  • Liquibase migrations are managed exclusively by the Write nodes during upgrades
  • Node IDs are assigned automatically to ensure uniqueness across the cluster

You do not need to configure any mode variables directly - they are set by the Finecko platform at provisioning and deployment time. If you need a custom node layout for your workload, contact support.


Choosing the Right Layout

| Situation | Recommended layout |
| --- | --- |
| Getting started / development | Single node, all modes enabled |
| Small production (< 50k loans) | Single node |
| COB is impacting API response times | Separate API and Batch nodes |
| COB window is too long | Add more Batch Worker nodes |
| Read traffic is high | Add Read-only nodes pointing at a read replica |
| Write traffic is high | Add more Write nodes (they share the primary DB) |