I feel like we have somehow thrown out the baby with the bathwater here – the baby being “Event-Driven Architecture”
You know, all that Kafka mindshare and Confluent Professional Services contracts – that Event-Driven Architecture. The same “Event Thinking” that Gartner is jumping up and down about. Why do I feel like Jeff Goldblum in Jurassic Park asking “Hello? There are going to be some Dinosaurs on the Dinosaur tour??”
I want my Event Sourcing model to actually allow me to listen and react to Events – and in case you have not realized what you really have, I’ll tell you: God’s gift, “Event Thinking” itself.
I don’t wanna hear about jumping through hoops to do events right, so let me delineate the hoops.
The constructs we need are actors, agents, and “the fabric”. Agents handle the listening and dispatching to Actors. Agents declaratively specify the “pattern match” on a particular topic. Agents allow Actors to practice the Hollywood Principle and provide implicit invocation.
Actors are pampered. They are spoon-fed an event, take the stage, do their work, and wait for the next gig. If they need data not contained in previous events (or since discarded), they can “Query” an assistant to get the lines – “Line please”. The Data Fabric is like a Concierge at the Actor’s Guild – handling the “off topic” information needs (since events are always the minimal expression of “on topic” information).
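The “Line please” flow can be sketched in a few lines. Everything here – `DataFabric`, `OrderActor`, the event shapes – is hypothetical, just to show an Actor asking the fabric for the off-topic data it was not spoon-fed:

```python
class DataFabric:
    """Concierge for off-topic data the event stream did not carry."""
    def __init__(self, records):
        self._records = records

    def query(self, key):
        # "Line please" -- hand the Actor the context it is missing.
        return self._records.get(key)


class OrderActor:
    def __init__(self, fabric):
        self._fabric = fabric

    def on_event(self, event):
        # The event is the minimal "on topic" payload: ids and amounts only.
        customer = event.get("customer")
        if customer is None:
            # Not in the event or any prior one -- query the fabric.
            customer = self._fabric.query(event["customer_id"])
        return {"type": "OrderPriced", "customer": customer, "total": event["amount"]}


fabric = DataFabric({"c-42": "Acme Corp"})
actor = OrderActor(fabric)
result = actor.on_event({"customer_id": "c-42", "amount": 99.0})
```

The Actor never goes looking for work; it is handed an event, and only reaches out when the script is missing a line.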
Agents ingest events and invoke Actors. It’s a simple job of dispatching, so a Declarative DSL is indicated.
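A declarative dispatch DSL might look like this sketch. The `@agent` decorator, the registry, and the topic names are all made up for illustration, not a real library – the point is that the Agent’s whole job is pattern match plus implicit invocation:

```python
import fnmatch

AGENTS = []  # (topic_pattern, handler) pairs -- the declarative registry

def agent(topic_pattern):
    """Declaratively bind an Actor entry point to a topic pattern."""
    def register(handler):
        AGENTS.append((topic_pattern, handler))
        return handler
    return register

def dispatch(topic, event):
    """The Agent's whole job: match the topic, invoke the Actors."""
    results = []
    for pattern, handler in AGENTS:
        if fnmatch.fnmatch(topic, pattern):
            results.append(handler(event))
    return results

@agent("orders.*")
def on_order(event):
    # Hollywood Principle: don't call us, we'll call you.
    return f"order handled: {event['id']}"

@agent("payments.captured")
def on_payment(event):
    return f"payment handled: {event['id']}"
```

The Actors never subscribe to anything themselves; the Agent layer calls them when a matching event arrives.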
Actors ingest events, do some processing, and emit an event. Said another way, Actors transform Events – like ETL. That’s what streaming is. Some Actors transform from the event log to a data model; this is how the data fabric gets populated. It’s all Actor-based, and you must have events exposed.
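A toy projection under assumed event shapes (`Deposited` and `Withdrawn` are invented for the example) shows such an Actor folding the event log into a queryable data model:

```python
def project(events):
    """Fold the event log into a read model (account -> balance)."""
    balances = {}
    for e in events:
        if e["type"] == "Deposited":
            balances[e["account"]] = balances.get(e["account"], 0) + e["amount"]
        elif e["type"] == "Withdrawn":
            balances[e["account"]] = balances.get(e["account"], 0) - e["amount"]
    return balances


log = [
    {"type": "Deposited", "account": "a1", "amount": 100},
    {"type": "Withdrawn", "account": "a1", "amount": 30},
    {"type": "Deposited", "account": "a2", "amount": 50},
]
balances = project(log)
```

Events in, data model out – the ETL of streaming, and the mechanism that keeps the fabric stocked.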
To reap CQRS benefits, commands and queries are separated, which means writing to and reading from possibly different data models. The classic pattern – write to the log, then listen to the log and write to the SQL DB – has historically meant eventual consistency, since ACID was not supported across the hop. You’ve blown the doors off Polyglot Persistence with not only ACID across the layers separating commands from queries, but the ability to write to/through a “projection” to achieve cross-model consistency – and I’m not sure even Kleppmann has thought about this use of the term. Writing to the metadata storage layer as a common event log would be the fastest, but writing through a layer to effect the same transitive impacts on the layers is just a hop away. Suddenly, Polyglot Persistence with ACID transactions is blooming.
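One way to picture “write to/through a projection” is this in-memory sketch, where a single lock stands in for an ACID transaction spanning the log and the read model. All names are hypothetical; a real system would do both writes inside one transaction on the storage layer:

```python
import threading

class WriteThroughStore:
    """Command side and query side updated in one atomic step."""
    def __init__(self):
        self._lock = threading.Lock()
        self.log = []          # command side: append-only event log
        self.read_model = {}   # query side: projected state

    def append(self, event):
        # Both writes happen under one lock -- the stand-in for an ACID
        # transaction covering the log and the projection together.
        with self._lock:
            self.log.append(event)
            self.read_model[event["key"]] = event["value"]

    def query(self, key):
        with self._lock:
            return self.read_model.get(key)


store = WriteThroughStore()
store.append({"key": "order-1", "value": "shipped"})
```

No window where the log has the event but the query model hasn’t caught up – that is the eventual-consistency gap this design closes.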
Other DBs run into issues when writes are implicitly followed by a read. Traditionally, you write, and then at some future time, somebody queries the info. In EDA, there are one or more immediate listeners (agents) hooking each event.
It’s called “Event-Driven” for a reason. We have to get to the events in order to process them – which means replication to scale topics and triggers, not polling.
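The trigger-not-polling point can be sketched as a push-based topic, where watchers are invoked at write time. This is an in-memory stand-in, not a real broker API – in practice the fan-out would ride on replication of the log:

```python
class Topic:
    def __init__(self):
        self._watchers = []

    def watch(self, callback):
        """Register a trigger fired on every write -- no polling loop anywhere."""
        self._watchers.append(callback)

    def publish(self, event):
        for cb in self._watchers:
            cb(event)  # the immediate listener hooks the event as it lands


seen = []
topic = Topic()
topic.watch(seen.append)
topic.publish({"type": "OrderCreated", "id": 7})
```

The consumer never asks “anything new yet?”; the write itself carries the event to every watcher.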
If we call that “watches” in this parlance, that’s cool. A rose… But when I hear “you gotta code it this way, because if you code it the other way events can get inconsistent”, then I have to say that in the kingdom of Polyglot ACID Persistence, this emperor is naked! Events are used to drive replication and transactional locks for consistency – so somehow you threw the baby out with the bath water.
Solve EDA, and you have the “Killer App” for Linearized, Stateful Replication at the most fundamental level.
Hope somebody can “understand the words that are coming out of my mouth.”
Duke Energy Modern Architecture