# Hakka Roundhouse
A microservices ready monolith
12 Nov 2020
Tags: software architecture, tutorial
Loh Siu Yin
Technology Consultant, Beyond Broadcast LLP
## Rank the following software engineering challenges. Most difficult first.
- Getting requirements right.
- Writing the code.
- Deploying the code (Continuous Integration / Continuous Delivery).
- Maintaining and extending the code -- continuous value creation.
## In my case, most challenging first:
1. Maintaining and extending the code:
Requirements constantly evolve and
bad architectures slow progress.
1. Getting requirements right:
This requires you to understand your user(s).
I feel that here, quantity leads to better quality.
1. Writing the code: Go makes it easier.
1. Deploying the code:
There are lots of CI/CD tools.
My choice is Skaffold with Kustomize into Kubernetes.
## Architectures: What we want
.image img/PlannedCity.jpg
## If we are not careful, we get:
.image img/UnplannedCity.jpg
## Can you tell me where the post offices are?
.image img/PlannedCity.jpg
That was my issue when I used the microservices architecture.
## I used this architecture instead
.image img/hakka-roundhouse.jpg
With this architecture, I know exactly where to find things.
For a medium-sized project, I deployed only one "Hakka roundhouse".
## Hakka Roundhouses
.image img/hakka-roundhouse.jpg _ 300
- Tens of families within a clan live in each house
- Closed to the outside, open on the inside
- Issues within an individual house are an internal matter -- "easily" solved
- Issues across roundhouses are less easily solved -- war?
## What are the pros and cons of software monoliths?
.image img/BigBallofMud.png
If you don't take care, you get a
Big Ball of Mud (BBoM): an unmaintainable mess.
## Avoiding BBoM: a Microservice ready monolith?
- Can many software modules "live" within a monolith?
- _Should_ these modules "live" within a single software repository?
- Communication within a monolith is certainly fast and reliable.
- But communication with systems outside the monolith relies on the network, with its ensuing lack of guarantees.
- As needs demand, can a software module (sub-folder) be forked, perhaps to a separate repo, to take on a life of its own?
- Can we build microservice-ready monoliths?
- *Go* has concurrent goroutines which can use channels to communicate with one another.
## Traditional monolith
.play -edit cmd/trad/main.go
----
Why do software monoliths typically grow into an unmanageable mess?
Consider function signatures. Are there any constraints on what they can be?
_Too_ much freedom?
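To make this concrete, here is a minimal sketch of such an unconstrained function (this is not the actual cmd/trad/main.go; the signature is made up):

```go
package main

import "fmt"

// sum's signature is whatever we like: variadic ints in, a bare int out.
// Nothing stops the next function from taking a map, three pointers and
// a channel. Total freedom -- and, over time, a tangle.
func sum(nums ...int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

func main() {
	fmt.Println(sum(1, 2, 3)) // prints 6
}
```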
## Microservice style
.code cmd/hakka/main.go
Here I gave the **sum** function gRPC / Protocol Buffers style parameters.
What is the function signature now? Is there loss of freedom? Loss of capability?
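A rough sketch of the shift (the real cmd/hakka/main.go uses protoc-generated types; the structs below are hypothetical stand-ins):

```go
package main

import "fmt"

// SumRequest and SumResponse stand in for protoc-generated message
// types. The field names here are invented for illustration.
type SumRequest struct {
	Nums []int64
}

type SumResponse struct {
	Sum int64
}

// sum now has exactly one request message in and one response message
// out -- less freedom, but the signature is serialisable and therefore
// ready to cross a process boundary.
func sum(req *SumRequest) *SumResponse {
	var s int64
	for _, n := range req.Nums {
		s += n
	}
	return &SumResponse{Sum: s}
}

func main() {
	resp := sum(&SumRequest{Nums: []int64{1, 2, 3}})
	fmt.Println(resp.Sum) // prints 6
}
```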
## Here is how to run the code
- Download, extract and install protoc in your $PATH.
.link https://github.com/google/protobuf/releases
- Tell Go modules about the gRPC dependencies:
    go get google.golang.org/grpc
    go get github.com/golang/protobuf/protoc-gen-go
- Run:
go run cmd/hakka/main.go
## Why gRPC style parameters?
With gRPC style parameters, we can *easily* extract
the sum function into an external microservice.
This is the central idea behind the Hakka Roundhouse
architecture.
When a function gets too "big":
1. Extract it from the monolith.
1. Deploy it as a separate microservice.
## My deployed Hakka Roundhouse:
Did *not* use gRPC style parameters.
Instead I passed JSON messages.
To understand why, we need to understand
the different ways software modules can communicate with
each other.
## Communications
## Local Procedure Call
We have already seen this:
.play -edit cmd/trad/main.go
- It is fast, reliable and easy.
- Well suited for use *within* a function group or "family"
living in a Roundhouse unit.
I define a function group as a set of functions that
work to achieve a common goal.
In a corporate context, we
may have an Engineering function group but within that group,
we could have UI/UX, Frontend Dev, Backend Dev, QA/Test etc.
## gRPC style Intra-process Call
This is the *local* function with gRPC style
parameters.
- Communication is fast and reliable (intra-process).
- But it is more formal (gRPC style with protocol buffers).
- Well suited when communicating with *other* function groups / families
within the Roundhouse.
## gRPC style External service Call
As previously mentioned, if a function gets
too "big", we can extract it and deploy it as
an external service.
- Communication is slower and less reliable (it depends on the network).
However, this service can now be scaled independently.
- Still requires the formality of gRPC style protocol buffers.
- Well suited when communicating *across* "different" Hakka Roundhouses.
## Messaging
Thus far, we have only discussed local or remote
procedure call methods.
This is similar to a telephone call between two people who wish to communicate:
- both parties need to be online
- at the same time
Thus local and remote procedure calls are synchronous.
Software modules can also communicate in a
"WhatsApp" or "e-mail" fashion.
Event sourcing and event-driven systems are message driven,
much like "e-mail" systems.
## Hakka Roundhouse and Messaging
## Hakka Roundhouse and Event messages
We can distribute (publish / subscribe to) messages in two ways:
1. "Loudspeaker" style -- just shout out the message.
1. "WhatsApp Group" style -- message the Group and a history
of messages is maintained.
## "Loudspeaker" style messaging with NATS
"Loudspeaker" fire-and-forget style Pub / Sub enables you to loosely couple components.
A publisher can publish to zero or more subscribers.
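A minimal sketch of this style using the github.com/nats-io/nats.go client, assuming a locally running nats-server; the subject name is made up:

```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a locally running nats-server.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	// Subscriber: runs for every message published on "sum.req".
	nc.Subscribe("sum.req", func(m *nats.Msg) {
		fmt.Printf("heard: %s\n", m.Data)
	})

	// Publisher: fire-and-forget; no error even if nobody is listening.
	nc.Publish("sum.req", []byte("1 2 3"))
	nc.Flush()

	time.Sleep(100 * time.Millisecond) // give the handler time to run
}
```

If no subscriber is running when a message is published, the message is simply gone -- nothing is stored.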
----
Starting the NATS message broker:
- Download NATS, unzip and install it in your $PATH.
.link https://github.com/nats-io/nats-server/releases/download/v2.1.8/nats-server-v2.1.8-linux-amd64.zip
- Run the message broker:
~/go/bin/nats-server
## NATS communication between function groups within a Roundhouse
.code cmd/hakkanats/main.go /30 O/,/40 O/
Think of **sumService** and **loadGen** as two function groups or families
within the Roundhouse.
loadGen is shouting out numbers to be added. loadGen
does not care if anyone is listening.
sumService listens to loadGen by subscribing to loadGen's subject.
## loadGen: Publishes numbers to be summed
.code cmd/hakkanats/main.go /50 O/,/60 O/
## sumService: Adds numbers
.code cmd/hakkanats/main.go /70 O/,/80 O/
## Running our NATS enabled monolith
Ensure NATS server is running, then:
go run cmd/hakkanats/main.go
----
Try stopping then re-starting the NATS message broker.
What do you think will happen?
## What did we just see?
1. Pub / Sub worked as expected while NATS was running.
1. *Sometimes* results were lost after NATS was stopped and restarted.
----
Why?
When the NATS client within loadGen could not reach NATS,
the client saved messages in memory.
When it could connect to NATS again,
it published the saved messages.
This is workable for short NATS outages but is likely
to run out of memory for longer outages.
It is also unreliable.
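As an aside, that client-side buffering is tunable; a minimal sketch using nats.go's connection options (the values chosen are arbitrary):

```go
package main

import "github.com/nats-io/nats.go"

func main() {
	// Cap how many bytes the client may buffer while disconnected,
	// and keep retrying the connection indefinitely.
	nc, err := nats.Connect(nats.DefaultURL,
		nats.ReconnectBufSize(8*1024*1024), // buffer limit during an outage
		nats.MaxReconnects(-1),             // -1 retries forever
	)
	if err != nil {
		panic(err)
	}
	defer nc.Close()
}
```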
## "WhatsApp Group" style messaging with NATS Streaming
NATS **Streaming** server stores messages with one of the following methods:
1. In memory (for ultimate speed).
1. On disk (still fast as there is a cache).
1. In an SQL database (if you want to analyse the message data).
Subscribers need not be online all the time.
Each subscriber can choose to
retrieve past messages from the NATS Streaming server
when they connect.
## Installation and server start-up
Install
.link https://github.com/nats-io/nats-streaming-server/releases/download/v0.18.0/nats-streaming-server-v0.18.0-linux-amd64.zip
Run
~/go/bin/nats-streaming-server
## loadGen with NATS Streaming
.code cmd/hakkastream/main.go /10 O/,/20 O/
I used JSON encoding here instead of protocol buffers.
I will discuss this further later.
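A sketch of what JSON-encoded publishing could look like with the github.com/nats-io/stan.go client; the payload struct, subject and client ID are hypothetical, not the actual slide code:

```go
package main

import (
	"encoding/json"

	stan "github.com/nats-io/stan.go"
)

// SumRequest is a hypothetical payload; the real field layout in
// cmd/hakkastream/main.go is not shown here.
type SumRequest struct {
	Nums []int `json:"nums"`
}

func main() {
	// "test-cluster" is the nats-streaming-server default cluster ID.
	sc, err := stan.Connect("test-cluster", "loadgen-demo")
	if err != nil {
		panic(err)
	}
	defer sc.Close()

	// json.Marshal is the whole "interface definition" -- convenient,
	// but nothing checks the schema for you.
	b, err := json.Marshal(SumRequest{Nums: []int{1, 2, 3}})
	if err != nil {
		panic(err)
	}
	if err := sc.Publish("sums", b); err != nil {
		panic(err)
	}
}
```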
## subService with NATS Streaming
.code cmd/hakkastream/main.go /30 O/,/40 O/
Notice the subscription options (a sketch using them follows below):
DurableName,
DeliverAllAvailable
Other options include:
StartWithLastReceived,
StartAtSequence(n),
StartAtTime(t),
StartAtTimeDelta(dur)
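A sketch of a durable subscriber exercising these options with github.com/nats-io/stan.go; the subject and names are made up:

```go
package main

import (
	"fmt"
	"time"

	stan "github.com/nats-io/stan.go"
)

func main() {
	sc, err := stan.Connect("test-cluster", "sub-demo")
	if err != nil {
		panic(err)
	}
	defer sc.Close()

	// DurableName: the server remembers this subscriber's position.
	// DeliverAllAvailable: replay the channel from the first message.
	_, err = sc.Subscribe("sums",
		func(m *stan.Msg) { fmt.Printf("seq %d: %s\n", m.Sequence, m.Data) },
		stan.DurableName("sum-sub"),
		stan.DeliverAllAvailable(),
	)
	if err != nil {
		panic(err)
	}

	time.Sleep(2 * time.Second) // let replayed messages arrive
}
```

Re-running with StartAtSequence(n) or one of the time-based options would replay from the chosen point instead.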
## Demo: Hakka Roundhouse monolith with NATS Streaming
main.go:
.code cmd/hakkastream/main.go /s O/,/e O/
Ensure the NATS Streaming server is running, then:
go run cmd/hakkastream/main.go
----
Edit cmd/hakkastream/main.go to select "DeliverAllAvailable"
then re-run.
## What did we just see?
1. Initially NATS Streaming looked like NATS.
1. When run with the DeliverAllAvailable option,
subService could retrieve all messages, from the very first one.
----
- DeliverAllAvailable
- StartWithLastReceived
- StartAtSequence(n)
- StartAtTime(t)
- StartAtTimeDelta(dur)
These options give great flexibility in how past messages are retrieved.
## Discussion
## JSON vs Protocol Buffers:
I went with JSON because my application had only 5 major
function groups (Roundhouse "families").
I could thus avoid the discipline and overhead that protocol buffers require.
Also, many publicly available services deliver their
payloads in JSON.
----
Protocol Buffers' strict interface definitions are a
great advantage when you have many services (> 7) to
track and evolve.
We can only manage about 7 things in our minds at any one time.
## Messaging vs RPC
I chose NATS Streaming mainly because my application
has fast publishers and slow subscribers.
I can restart my monolith and the subscriber components
in the monolith will query NATS Streaming server and
continue seamlessly.
----
If I had used gRPC, I would have had to queue up the fast requests,
then dequeue them for processing.
Furthermore, gRPC does not keep a history of requests / messages
as NATS Streaming server does.
Thus I would also need a message logger to replay past messages.
## Hakka Roundhouse monolith vs Microservices
With the Hakka Roundhouse architecture, I had two deployables
to manage:
1. The application binary
2. NATS Streaming server binary
Thus far my application has been running and evolving for the past
year and a half without problems.
The number of deployables remains at 2.
----
With microservices, I would have to maintain at least 6 deployables.
If each component of a major function group were also made a microservice,
I would have 30 or more deployables to track and maintain.
## Continuous Integration / Continuous Delivery
I use Skaffold with Kustomize to deploy to a Kubernetes cluster.
The command line is:
skaffold -p prod run
----
The above command:
1. Compiles my code.
1. Creates and version-tags a Docker container.
1. Pushes it to a Docker registry.
1. Creates a deployment in my Kubernetes cluster for both
my application binary and the NATS Streaming server.
1. Checks that Kubernetes successfully runs the correct container versions.
## Presentation and code download
.link https://github.com/siuyin/present-hakka_roundhouse