From almost zero to 1000 e-invoices/min: optimizing Spring API with transactions and JMeter
TL;DR - I optimized e-invoice issuance (queries and transactions); several @Transactional methods were holding DB connections during external I/O. I narrowed the transactions and the 1000 invoices/min target was reached in JMeter.
On a Java project I had to optimize electronic invoice issuance for orders: everything asynchronous, with dedicated DB schemas and integrations with other teams' services. Before the changes, the system couldn't emit even one invoice per minute.
To find the slow spots I used a profiler (Java VisualVM). Time was spent in DB queries and in code that held a transaction open unnecessarily. Two fronts: optimize queries and fix the use of @Transactional.
The @Transactional problem
Several service methods were annotated with @Transactional at class level, or on methods that did much more than a single write. The transaction opened at the start of the method and only committed at the end. In between, the code read data, called other services, built XML and talked to the tax authority. The pooled connection stayed held the whole time, and any lock or wait inside the method stretched the transaction further. Under load the pool was exhausted and every other request queued for a connection.
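The pool arithmetic explains the queueing. By Little's law, a pool of C connections each held for S seconds caps throughput at C/S requests per second, regardless of how many worker threads exist. A sketch with illustrative numbers (the pool size and latencies are assumptions, not measurements from the real system):

```java
public class PoolThroughput {

    // Little's law applied to the connection pool:
    // max requests/s = poolSize / holdTimePerRequest
    public static double maxPerMinute(int poolSize, double holdSeconds) {
        return poolSize / holdSeconds * 60;
    }

    public static void main(String[] args) {
        // connection held across reads, XML building and a ~3 s tax-authority call
        System.out.printf("whole method transactional: %.0f/min%n", maxPerMinute(10, 3.0));
        // connection held only for the two writes (~50 ms)
        System.out.printf("writes only transactional:  %.0f/min%n", maxPerMinute(10, 0.05));
    }
}
```

Ten connections held for three seconds each cap the system near 200 invoices/min no matter the thread count; shrinking the hold to the write pair raises that ceiling sixty-fold.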
Example of what I found (names are generic, but the pattern was like this):

```java
@Service
public class NotaFiscalService {

    @Transactional // transaction (and connection) open for the whole method
    public void processarEmissao(Pedido pedido) {
        Pedido comItens = pedidoRepository.findByIdComItens(pedido.getId()); // read
        validar(comItens);                              // read + business rules
        String xml = montarXmlNfe(comItens);            // CPU only
        String respostaSefaz = sefazClient.enviar(xml); // external I/O
        Nota nota = parseResposta(respostaSefaz);
        notaRepository.save(nota);                      // write
        pedidoRepository.atualizarStatus(pedido.getId(), EMITIDO); // write
    }
}
```
Only the pair save(nota) + atualizarStatus needed an active transaction. Fetching the order, validating, building XML and calling the tax authority don’t need to hold a connection or write transaction. With @Transactional on the whole method, each call occupied a connection for all of that time.
The fix was to keep a transaction only around the writes and leave the orchestrating method non-transactional. One detail matters here: calling a @Transactional method from inside the same class bypasses the Spring proxy, so the annotation would be silently ignored. The write therefore goes into a separate bean, on a public method the proxy can intercept:

```java
@Service
public class NotaFiscalService {

    private final NotaFiscalPersistencia persistencia;

    public NotaFiscalService(NotaFiscalPersistencia persistencia) {
        this.persistencia = persistencia;
    }

    // No @Transactional: no connection held during the slow parts
    public void processarEmissao(Pedido pedido) {
        Pedido comItens = pedidoRepository.findByIdComItens(pedido.getId());
        validar(comItens);
        String xml = montarXmlNfe(comItens);
        String respostaSefaz = sefazClient.enviar(xml); // external I/O outside any transaction
        Nota nota = parseResposta(respostaSefaz);
        persistencia.persistirNotaEAtualizarPedido(nota, pedido.getId());
    }
}

@Service
public class NotaFiscalPersistencia {

    @Transactional // short transaction: only the two writes
    public void persistirNotaEAtualizarPedido(Nota nota, Long pedidoId) {
        notaRepository.save(nota);
        pedidoRepository.atualizarStatus(pedidoId, EMITIDO);
    }
}
```
Elsewhere there was @Transactional on read-only methods (e.g. fetching an order to build a report). For reads that don’t need strict consistency I used @Transactional(readOnly = true) or removed the annotation and let each find use its own short-lived connection. The transaction (and connection) stopped staying open for no reason.
Queries and indexes
For the queries that showed up in the profiler I used EXPLAIN ANALYZE in Postgres to inspect the execution plan and index usage. In most cases the indexes were already correct; I added one missing index for a filter used across the flow. The biggest gains came from reducing N+1 (joins or batching) and from the application holding connections for less time after the transaction changes.
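The N+1 reduction is easy to quantify in round trips: loading each order's items with a separate query costs one query per order on top of the initial fetch, while batching with IN (...) (or a single join) collapses that. A back-of-the-envelope sketch with hypothetical counts:

```java
public class RoundTrips {

    // N+1 pattern: one query for the orders, then one query per order for its items
    public static int nPlusOne(int orders) {
        return 1 + orders;
    }

    // Batched pattern: one query for the orders, then ceil(orders/batchSize)
    // IN (...) queries for the items; a single join would be 1 query total
    public static int batched(int orders, int batchSize) {
        return 1 + (orders + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        System.out.println(nPlusOne(1000));     // 1001 round trips
        System.out.println(batched(1000, 100)); // 11 round trips
    }
}
```

Each round trip pays network latency plus a connection checkout, so under load the difference compounds well beyond the raw query count.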
Load test with JMeter
After the refactor I set up a load test with JMeter. The goal was 1000 invoices per minute across the full flow: create the order and process it until the invoice is officially issued.
```mermaid
flowchart LR
    Pedido[Order] --> Processos[Processes]
    Processos --> NFe[E-invoice issuance]
    NFe --> Ok[1000/min]
    Estoque[Inventory] -.-> Gargalo[Bottleneck]
    Contabilidade[Accounting] -.-> Gargalo
```
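Sizing the JMeter thread group follows from Little's law again: the concurrency needed is roughly the target throughput times the average response time. A sketch assuming an average end-to-end latency of 1.2 s per invoice (an illustrative figure, not measured from the real system):

```java
public class JMeterSizing {

    // Little's law: concurrent threads ≈ throughput (req/s) * avg latency (s),
    // rounded to the nearest whole thread
    public static long threadsNeeded(double targetPerMinute, double avgLatencySeconds) {
        return Math.round(targetPerMinute / 60.0 * avgLatencySeconds);
    }

    public static void main(String[] args) {
        System.out.println(threadsNeeded(1000, 1.2)); // ~20 threads for 1000/min
        System.out.println(threadsNeeded(1200, 1.5)); // ~30 threads for 1200/min
    }
}
```

In JMeter itself, a Constant Throughput Timer can pin the request rate while the thread group supplies the concurrency, with some headroom above the computed minimum for latency spikes.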
The target was easy to hit. The e-invoice issuance step stopped being the limiter; the bottleneck moved to other teams’ services (inventory and accounting). JMeter gave a concrete number and showed where the real limit was.
That doesn’t replace long stress tests or deeper bottleneck analysis (CPU, connections, pool), but for an order-of-magnitude check and for proving the flow can handle the target, JMeter does the job.