Introduction to Apache Kafka and Real-Time ETL
For DBAs and others who are interested in new ways of working with relational databases
About Myself
• Gwen Shapira – System Architect @ Confluent
• Committer @ Apache Kafka, Apache Sqoop
• Author of “Hadoop Application Architectures” and “Kafka: The Definitive Guide”
• Previously:
• Software Engineer @ Cloudera
• Oracle ACE Director
• Senior Consultant @ Pythian
• DBA @ Mercury Interactive
• Find me:
• gwen@confluent.io
• @gwenshap
There’s a Book on That!
Apache Kafka is publish-subscribe messaging
rethought as a distributed commit log,
turned into a stream data platform.
An Optical Illusion
We’ll talk about:
• Write-ahead logs
• So what is Kafka?
• Awesome use cases for Kafka
• Data streams and real-time ETL
• Where you can learn more
Write-Ahead Logging (WAL)
“… a standard method for ensuring data integrity… changes to data files… must be written only after those changes have been logged… in the event of a crash we will be able to recover the database using the log.”
(from the PostgreSQL documentation)
Important Point
The write-ahead log is the only reliable source of information about the current state of the database.
WAL is used for:
• Recovering a consistent state of the database
• Replicating the database (Streaming Replication, Hot Standby)
If you look far enough into the archived logs, you can reconstruct the entire database.
That’s nice, but what is Kafka?
Kafka provides a fast, distributed, highly scalable, highly available publish-subscribe messaging system, based on the tried-and-true log structure.
In turn, this solves part of a much harder problem: communication and integration between components of large software systems.
The Basics
• Messages are organized into topics
• Producers push messages
• Consumers pull messages
• Kafka runs in a cluster; nodes are called brokers
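To make this concrete, here is a minimal producer sketch using the standard Java client. The broker address ("broker1:9092") and topic name ("page-views") are made-up placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Any broker in the cluster can bootstrap the client.
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Push one message to the "page-views" topic; consumers pull it later.
            producer.send(new ProducerRecord<>("page-views", "user42", "clicked:home"));
        }
    }
}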
Topics, Partitions and Logs
• Each partition is a log
• Each broker has many partitions
[Figure: a cluster of brokers, each hosting several partitions (Partition 0, Partition 1, Partition 2) of a topic.]
Producers load balance between partitions
[Figure: a client producing to a topic, with messages balanced across Partition 0, Partition 1, and Partition 2 spread over the brokers.]
Consumers
[Figure: a Kafka cluster with one topic stored as Partition A, B, and C (each a file), read in parallel by consumers in Consumer Group X and Consumer Group Y, each group at its own offsets.]
• Order is retained within a partition, but not across partitions
• Offsets are kept per consumer group
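And a matching consumer sketch, with the same illustrative names. Because offsets are tracked per group.id, two groups can read the same topic independently:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MinimalConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "group-x");  // offsets are tracked per consumer group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Partitions of the topic are divided among consumers in the group.
            consumer.subscribe(List.of("page-views"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records)
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
            }
        }
    }
}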
Kafka “Magic” – Why is it so fast?
• 250M events per second on one node at 3 ms latency
• Scales to any number of consumers
• Stores data for a set amount of time, without tracking who read what data
• Replicates, but with no need to sync to disk
• Zero-copy writes from memory/disk to the network
How do people use Kafka?
• As a message bus
• As a buffer for replication systems
• As a reliable feed for event processing
• As a buffer for event processing
• To decouple apps from databases
But really, how do they use Kafka?
Raise your hand if this sounds familiar
“My next project was to get a working Hadoop setup… Having little experience in this area, we naturally budgeted a few weeks for getting data in and out, and the rest of our time for implementing fancy algorithms.”
-- Jay Kreps, Kafka PMC
Data pipelines start like this:
[Figure: one client writing to one source.]
Then we reuse them:
[Figure: several clients all writing to the same source.]
Then we add consumers to the existing sources:
[Figure: the clients now feed a backend, plus another backend.]
Then it starts to look like this:
[Figure: the same clients wired point-to-point to a backend and three more backends.]
With maybe some of this:
[Figure: the same tangle, with extra links between the backends themselves.]
Queues decouple systems: adding new systems doesn’t require changing existing systems.
This is where we are trying to get:
Kafka decouples data pipelines
[Figure: four source systems produce into a Kafka cluster (producers → brokers → consumers); Hadoop, security systems, real-time monitoring, and a data warehouse all consume from Kafka.]
Important notes:
• Producers and consumers don’t need to know about each other
• Performance issues on consumers don’t impact producers
• Consumers are protected from herds of producers
• Lots of flexibility in handling load
• Messages are available to anyone – lots of new use cases: monitoring, audit, troubleshooting
http://www.slideshare.net/gwenshap/queues-pools-caches
My Favorite Use Cases
• Shops consume inventory updates
• Clicking around an online shop? Your clicks go to Kafka, and recommendations come back
• Flagging credit card transactions as fraudulent
• Flagging game interactions as abuse
• Least favorite: surge pricing at Uber
• Huge list of users at kafka.apache.org
Got it!
But what about real-time ETL?
Remember this?
[Figure: the same decoupled-pipeline diagram – source systems producing into Kafka, with Hadoop, security systems, real-time monitoring, and a data warehouse consuming.]
Kafka is smack in the middle of all data pipelines.
If data flies into Kafka in real time, why wait 24 hours before pulling it into a DWH?
Why does Kafka make real-time ETL better?
• It can integrate with any data source: RDBMS, NoSQL, applications, web applications, logs
• Consumers can be real-time, but they don’t have to be
• Reading from and writing to Kafka is cheap, so it is a great place to store intermediate state
• You can fix mistakes by rereading some of the data again: same data, in the same order (see the replay sketch below)
• Adding more pipelines/aggregations has no impact on source systems = low risk
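Since the log keeps the same data in the same order, a consumer can fix a processing mistake simply by rewinding. A sketch of that replay pattern with the Java client; the topic, group, and offset are illustrative:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReplayFromOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "etl-fixup");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("page-views", 0);
            consumer.assign(List.of(tp));   // manual assignment instead of group balancing
            consumer.seek(tp, 1_000_000L);  // rewind to a known-good offset
            // Reprocess from there: same data, same order.
            for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1)))
                System.out.println("reprocessing offset " + r.offset() + ": " + r.value());
        }
    }
}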
It is all valuable data
[Figure: topics flowing from raw data to clean, enriched, aggregated, and filtered data, feeding a dashboard, reports, data scientists, and alerts (“OMG”).]
OK, but how does my data get into Kafka?
• Producers
• Log4J
• REST Proxy
• Bottled Water
• Kafka Connect and its ecosystem of connectors (sketched below)
• Other ecosystem tools
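As one concrete option, a Kafka Connect JDBC source can poll a Postgres table into a topic. This is a hedged sketch of a standalone-mode .properties file, assuming the Confluent JDBC connector; the connection details, table, and column names are placeholders:

# Hypothetical Kafka Connect JDBC source (standalone mode).
name=postgres-orders-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# Connection details are placeholders.
connection.url=jdbc:postgresql://db-host:5432/shop
connection.user=etl
connection.password=secret
# Pull only rows whose "id" is higher than the last one seen.
mode=incrementing
incrementing.column.name=id
table.whitelist=orders
# Rows from table "orders" land in topic "postgres-orders".
topic.prefix=postgres-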
But wait, how do we process the data?
• However you want:
• You just consume data, modify it, and produce it back (see the sketch below)
• Built into Kafka:
• KProcessor
• KStream
• Popular choices:
• Storm
• Spark Streaming
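The “however you want” option is just a consume-modify-produce loop. A minimal sketch with the plain Java clients, with illustrative topic names and a toy transformation:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConsumeModifyProduce {
    public static void main(String[] args) {
        Properties c = new Properties();
        c.put("bootstrap.servers", "broker1:9092");
        c.put("group.id", "cleaners");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        Properties p = new Properties();
        p.put("bootstrap.servers", "broker1:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaConsumer<String, String> in = new KafkaConsumer<>(c);
             KafkaProducer<String, String> out = new KafkaProducer<>(p)) {
            in.subscribe(List.of("raw-data"));
            while (true) {
                for (ConsumerRecord<String, String> r : in.poll(Duration.ofMillis(500)))
                    // "Modify" here is just lower-casing; real pipelines clean or enrich.
                    out.send(new ProducerRecord<>("clean-data", r.key(), r.value().toLowerCase()));
            }
        }
    }
}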
One more thing…
Schema is a MUST HAVE for data integration.
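For example, a shared schema – here a hypothetical Avro record (Avro is the format commonly paired with Kafka and a schema registry) – lets every reader decode the data without guessing:

{
  "type": "record",
  "name": "PageView",
  "namespace": "com.example.shop",
  "fields": [
    {"name": "user_id",   "type": "string"},
    {"name": "page",      "type": "string"},
    {"name": "view_time", "type": "long"}
  ]
}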
Need More Kafka?
• https://kafka.apache.org/documentation.html
• My video tutorial: http://shop.oreilly.com/product/0636920038603.do
• http://www.michael-noll.com/blog/2014/08/18/apache-kafka-training-deck-and-tutorial/
• Our website: http://confluent.io
• Oracle guide to real-time ETL: http://www.oracle.com/technetwork/middleware/data-integrator/overview/best-practices-for-realtime-data-wa-132882.pdf

Editor's Notes

  • #14: Topics are partitioned; each partition is ordered and immutable. Messages in a partition have an ID, called an offset. The offset uniquely identifies a message within a partition.
  • #15: Kafka retains all messages for a fixed amount of time, not waiting for acks from consumers. The only metadata retained per consumer is its position in the log – the offset – so adding many consumers is cheap. On the other hand, consumers have more responsibility and are more challenging to implement correctly. And “batching” consumers is not a problem.
  • #16: 3 partitions, each replicated 3 times.
  • #17: They choose how many replicas must ACK a message before it is considered committed. This is the tradeoff between speed and reliability.
  • #18: (Same note as #17.)
  • #19: Consumers can read from one or more partition leaders. You can’t have two consumers in the same group reading the same partition. Leaders obviously do more work, but they are balanced between nodes. We reviewed the basic components of the system, and it may seem complex; in the next section we’ll see how simple it actually is to get started with Kafka.
  • #25: Then we end up adding clients to use that source.
  • #26: But as we start to deploy our applications, we realize that clients need data from a number of sources. So we add them as needed.
  • #28: But over time, particularly if we are segmenting services by function, we have stuff all over the place, and the dependencies are a nightmare. This makes for a fragile system.
  • #30: Kafka is a pub/sub messaging system that can decouple your data pipelines. Most of you are probably familiar with its history at LinkedIn, where it is used as a high-throughput, relatively low-latency commit log. It allows sources to push data without worrying about which clients are reading it. Note that producers push and consumers pull. Kafka itself is a cluster of brokers, which handles both persisting data to disk and serving that data to consumer requests.
  • #34: (Same note as #30.)
  • #40: Bottled Water uses PostgreSQL’s logical decoding output/client API.
  • #44: Sorry, but “schema on read” is kind of B.S. We admit that there is a schema, but we want to “ingest fast,” so we shift the burden to the readers. But the data is written once and read many, many times by many different people. They each need to figure this out on their own? This makes no sense. Also, how are you going to validate the data without a schema?
  • #45: There’s no data dictionary for Kafka; see https://github.com/schema-repo/schema-repo
  • #53: There are many options for handling excessive user requests. The only thing that is not an option: throw everything at the database and let the DB queue the excessive load.