
This commit is contained in:
ridethepig 2023-04-13 16:15:31 +08:00
parent 11facbf2ef
commit 6caedf6565
4 changed files with 1605 additions and 5 deletions

View File

@ -2904,5 +2904,582 @@
:height 1296}),
:page 450},
:content {:text "System Architecture"},
:properties {:color "yellow"}}
{:id #uuid "64378274-897c-4aac-b246-49bda634b872",
:page 450,
:position {:bounding {:x1 411.0239486694336,
:y1 1007.0803833007812,
:x2 467.33980560302734,
:y2 1023.6518249511719,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 411.0239486694336,
:y1 1007.0803833007812,
:x2 467.33980560302734,
:y2 1023.6518249511719,
:width 806.3999999999999,
:height 1209.6}),
:page 450},
:content {:text "manifold"},
:properties {:color "green"}}
{:id #uuid "643786f0-5f9c-4441-8898-82ccd6a1a464",
:page 452,
:position {:bounding {:x1 108.34862899780273,
:y1 690.3839416503906,
:x2 286.65130615234375,
:y2 710.9553833007812,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 108.34862899780273,
:y1 690.3839416503906,
:x2 286.65130615234375,
:y2 710.9553833007812,
:width 806.3999999999999,
:height 1209.6}),
:page 452},
:content {:text "A Canonical Device"},
:properties {:color "yellow"}}
{:id #uuid "64378926-ce8a-4e38-a3fe-62fb5c4994e6",
:page 453,
:position {:bounding {:x1 198.56453323364258,
:y1 352.04466247558594,
:x2 370.1695899963379,
:y2 372.6160888671875,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 198.56453323364258,
:y1 352.04466247558594,
:x2 370.1695899963379,
:y2 372.6160888671875,
:width 806.3999999999999,
:height 1209.6}),
:page 453},
:content {:text "Canonical Protocol"},
:properties {:color "yellow"}}
{:id #uuid "64378c55-677c-4ab7-94c6-02ff41b90ded",
:page 453,
:position {:bounding {:x1 276.63047790527344,
:y1 821.125,
:x2 438.38832092285156,
:y2 837.6964721679688,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 276.63047790527344,
:y1 821.125,
:x2 438.38832092285156,
:y2 837.6964721679688,
:width 806.3999999999999,
:height 1209.6}),
:page 453},
:content {:text "programmed I/O (PIO)"},
:properties {:color "yellow"}}
{:id #uuid "64378e9e-0f95-4312-a19e-3ee9d0b4ef1e",
:page 455,
:position {:bounding {:x1 482.5089569091797,
:y1 575.2054100036621,
:x2 555.9764862060547,
:y2 591.7768211364746,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 482.5089569091797,
:y1 575.2054100036621,
:x2 555.9764862060547,
:y2 591.7768211364746,
:width 806.3999999999999,
:height 1209.6}),
:page 455},
:content {:text "coalescing"},
:properties {:color "yellow"}}
{:id #uuid "64379241-c097-4aaa-b545-582df132b35f",
:page 456,
:position {:bounding {:x1 0,
:y1 76.5714340209961,
:x2 638.9917907714844,
:y2 329.66072845458984,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 0,
:y1 76.5714340209961,
:x2 0,
:y2 99.42858123779297,
:width 806.3999999999999,
:height 1209.6}
{:x1 548.4464416503906,
:y1 294.49108123779297,
:x2 638.9917907714844,
:y2 311.0625228881836,
:width 806.3999999999999,
:height 1209.6}
{:x1 108.35714912414551,
:y1 313.0892868041992,
:x2 240.935640335083,
:y2 329.66072845458984,
:width 806.3999999999999,
:height 1209.6}),
:page 456},
:content {:text "Direct Memory Access (DMA)"},
:properties {:color "yellow"}}
{:id #uuid "6437989d-c18e-4cc7-9cb0-737384cc7960",
:page 457,
:position {:bounding {:x1 379.8343276977539,
:y1 517.4286041259766,
:x2 507.8338394165039,
:y2 538.0000152587891,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 379.8343276977539,
:y1 517.4286041259766,
:x2 507.8338394165039,
:y2 538.0000152587891,
:width 806.3999999999999,
:height 1209.6}),
:page 457},
:content {:text " Device Driver"},
:properties {:color "yellow"}}
{:id #uuid "643799a7-dfae-46e0-88e6-ebf587755d75",
:page 458,
:position {:bounding {:x1 247.33929443359375,
:y1 401.67859649658203,
:x2 494.9961395263672,
:y2 418.2500228881836,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 247.33929443359375,
:y1 401.67859649658203,
:x2 494.9961395263672,
:y2 418.2500228881836,
:width 806.3999999999999,
:height 1209.6}),
:page 458},
:content {:text "Figure 36.4: The File System Stack"},
:properties {:color "yellow"}}
{:id #uuid "64379a07-5bc3-49b2-93e2-f371ad2b5347",
:page 457,
:position {:bounding {:x1 160.4375114440918,
:y1 649.5625305175781,
:x2 227.97158432006836,
:y2 666.1339569091797,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 160.4375114440918,
:y1 649.5625305175781,
:x2 227.97158432006836,
:y2 666.1339569091797,
:width 806.3999999999999,
:height 1209.6}),
:page 457},
:content {:text "oblivious "},
:properties {:color "green"}}
{:id #uuid "64379b8b-7c37-4d7e-8135-1d025eb42ae3",
:page 460,
:position {:bounding {:x1 464.8994674682617,
:y1 827.3839721679688,
:x2 512.903678894043,
:y2 843.9553833007812,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 464.8994674682617,
:y1 827.3839721679688,
:x2 512.903678894043,
:y2 843.9553833007812,
:width 806.3999999999999,
:height 1209.6}),
:page 460},
:content {:text "hauled "},
:properties {:color "green"}}
{:id #uuid "64379b93-cb30-45a8-afe6-53052c08fa6f",
:page 460,
:position {:bounding {:x1 547.1767807006836,
:y1 827.3839721679688,
:x2 595.1810531616211,
:y2 843.9553833007812,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 547.1767807006836,
:y1 827.3839721679688,
:x2 595.1810531616211,
:y2 843.9553833007812,
:width 806.3999999999999,
:height 1209.6}),
:page 460},
:content {:text "trailer"},
:properties {:color "green"}}
{:id #uuid "64379ba3-e41d-411f-ab6d-9a5f1424ac26",
:page 460,
:position {:bounding {:x1 249.13033294677734,
:y1 920.3750305175781,
:x2 298.4103469848633,
:y2 936.9465026855469,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 249.13033294677734,
:y1 920.3750305175781,
:x2 298.4103469848633,
:y2 936.9465026855469,
:width 806.3999999999999,
:height 1209.6}),
:page 460},
:content {:text "obscure"},
:properties {:color "green"}}
{:id #uuid "64379e9a-840a-48c9-b804-03e6b179a6a6",
:page 458,
:position {:bounding {:x1 224.1345672607422,
:y1 943.2054138183594,
:x2 455.71478271484375,
:y2 963.77685546875,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 224.1345672607422,
:y1 943.2054138183594,
:x2 455.71478271484375,
:y2 963.77685546875,
:width 806.3999999999999,
:height 1209.6}),
:page 458},
:content {:text "A Simple IDE Disk Driver"},
:properties {:color "yellow"}}
{:id #uuid "64379f7c-b440-4023-bc10-fd27071ec742",
:page 464,
:position {:bounding {:x1 445.2321548461914,
:y1 324.63395404815674,
:x2 645.885139465332,
:y2 360.0625123977661,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 445.2321548461914,
:y1 324.63395404815674,
:x2 645.885139465332,
:y2 360.0625123977661,
:width 806.3999999999999,
:height 1209.6}),
:page 464},
:content {:text "Hard Disk Drives"},
:properties {:color "yellow"}}
{:id #uuid "6437a316-6185-4eae-bc56-eeca9c5dfc0d",
:page 464,
:position {:bounding {:x1 168.15179443359375,
:y1 875.5714721679688,
:x2 364.46311950683594,
:y2 892.3125305175781,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 168.15179443359375,
:y1 875.5714721679688,
:x2 364.46311950683594,
:y2 892.3125305175781,
:width 806.3999999999999,
:height 1209.6}),
:page 464},
:content {:text " address space of the drive."},
:properties {:color "yellow"}}
{:id #uuid "6437a4a9-3103-4830-abc7-dba0b1067b76",
:page 465,
:position {:bounding {:x1 0,
:y1 220.5714340209961,
:x2 693.8439025878906,
:y2 485.80360412597656,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 0,
:y1 220.5714340209961,
:x2 0,
:y2 243.42858123779297,
:width 806.3999999999999,
:height 1209.6}
{:x1 0,
:y1 236.5714340209961,
:x2 0,
:y2 259.42858123779297,
:width 806.3999999999999,
:height 1209.6}
{:x1 0,
:y1 252.5714340209961,
:x2 0,
:y2 275.42857360839844,
:width 806.3999999999999,
:height 1209.6}
{:x1 0,
:y1 268.5714340209961,
:x2 0,
:y2 291.42857360839844,
:width 806.3999999999999,
:height 1209.6}
{:x1 519.3035888671875,
:y1 391.98216247558594,
:x2 528.9941711425781,
:y2 403.4107208251953,
:width 806.3999999999999,
:height 1209.6}
{:x1 160.4375057220459,
:y1 394.83929443359375,
:x2 494.1349239349365,
:y2 411.4107208251953,
:width 806.3999999999999,
:height 1209.6}
{:x1 528.9910888671875,
:y1 394.83929443359375,
:x2 693.8439025878906,
:y2 411.5803680419922,
:width 806.3999999999999,
:height 1209.6}
{:x1 160.4375057220459,
:y1 413.43751525878906,
:x2 660.9213619232178,
:y2 430.17857360839844,
:width 806.3999999999999,
:height 1209.6}
{:x1 160.4375057220459,
:y1 432.0357208251953,
:x2 660.1910152435303,
:y2 448.7768096923828,
:width 806.3999999999999,
:height 1209.6}
{:x1 160.4375057220459,
:y1 450.6339569091797,
:x2 660.1794185638428,
:y2 467.37501525878906,
:width 806.3999999999999,
:height 1209.6}
{:x1 160.4375057220459,
:y1 469.23216247558594,
:x2 562.6104793548584,
:y2 485.80360412597656,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "one can usually assume that accessing two blocks1 near one-another within the drives address space will be faster than accessing two blocks that are far apart. One can also usually assume that accessing blocks in a contiguous chunk (i.e., a sequential read or write) is the fastest access mode, and usually much faster than any more random access pattern."},
:properties {:color "yellow"}}
{:id #uuid "6437a4da-bca4-4f13-b018-30f3400d169f",
:page 465,
:position {:bounding {:x1 462.02288818359375,
:y1 536.9018249511719,
:x2 662.5404357910156,
:y2 553.4732360839844,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 462.02288818359375,
:y1 536.9018249511719,
:x2 662.5404357910156,
:y2 553.4732360839844,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "components of a modern disk."},
:properties {:color "yellow"}}
{:id #uuid "6437a4f2-5d89-495a-a984-b427a3d03e74",
:page 465,
:position {:bounding {:x1 282.6071472167969,
:y1 555.5000305175781,
:x2 329.2925262451172,
:y2 572.0714721679688,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 282.6071472167969,
:y1 555.5000305175781,
:x2 329.2925262451172,
:y2 572.0714721679688,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "platter"},
:properties {:color "yellow"}}
{:id #uuid "6437a4f9-3de4-451b-a7cc-faf67b8530e8",
:page 465,
:position {:bounding {:x1 0,
:y1 364.57144927978516,
:x2 695.8244323730469,
:y2 628.0268249511719,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 0,
:y1 364.57144927978516,
:x2 0,
:y2 387.4285888671875,
:width 806.3999999999999,
:height 1209.6}
{:x1 668.0487365722656,
:y1 592.6964721679688,
:x2 695.8244323730469,
:y2 609.2678833007812,
:width 806.3999999999999,
:height 1209.6}
{:x1 160.4375057220459,
:y1 611.2857360839844,
:x2 188.97083282470703,
:y2 628.0268249511719,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "surface"},
:properties {:color "yellow"}}
{:id #uuid "6437a4fd-450b-49ff-acd1-e46d3b507079",
:page 465,
:position {:bounding {:x1 532.2857360839844,
:y1 668.7589721679688,
:x2 585.8849182128906,
:y2 685.3303833007812,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 532.2857360839844,
:y1 668.7589721679688,
:x2 585.8849182128906,
:y2 685.3303833007812,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "spindle"},
:properties {:color "yellow"}}
{:id #uuid "6437a503-6b91-4b61-b288-9cea9c2ea832",
:page 465,
:position {:bounding {:x1 369.4464416503906,
:y1 819.2232666015625,
:x2 404.8913879394531,
:y2 835.794677734375,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 369.4464416503906,
:y1 819.2232666015625,
:x2 404.8913879394531,
:y2 835.794677734375,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "track"},
:properties {:color "yellow"}}
{:id #uuid "6437a50b-a53a-476b-8ef2-8bcbc21d7073",
:page 465,
:position {:bounding {:x1 317.1339416503906,
:y1 932.4732666015625,
:x2 386.8203582763672,
:y2 949.044677734375,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 317.1339416503906,
:y1 932.4732666015625,
:x2 386.8203582763672,
:y2 949.044677734375,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "disk head"},
:properties {:color "yellow"}}
{:id #uuid "6437a50f-5c6c-47ff-9179-ac48118342d7",
:page 465,
:position {:bounding {:x1 475.7476501464844,
:y1 951.0714721679688,
:x2 538.2141418457031,
:y2 967.6428833007812,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 475.7476501464844,
:y1 951.0714721679688,
:x2 538.2141418457031,
:y2 967.6428833007812,
:width 806.3999999999999,
:height 1209.6}),
:page 465},
:content {:text "disk arm"},
:properties {:color "yellow"}}
{:id #uuid "6437a841-9b37-42dc-a8dc-339085099a5a",
:page 466,
:position {:bounding {:x1 326.42398834228516,
:y1 661.8214721679688,
:x2 474.7255325317383,
:y2 680.6785888671875,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 326.42398834228516,
:y1 661.8214721679688,
:x2 474.7255325317383,
:y2 680.6785888671875,
:width 806.3999999999999,
:height 1209.6}),
:page 466},
:content {:text " Rotational Delay"},
:properties {:color "yellow"}}
{:id #uuid "6437aa03-61a9-40c1-ba53-98d0e1ab87b9",
:page 467,
:position {:bounding {:x1 659.3125,
:y1 549.7857513427734,
:x2 691.29248046875,
:y2 566.3571624755859,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 659.3125,
:y1 549.7857513427734,
:x2 691.29248046875,
:y2 566.3571624755859,
:width 806.3999999999999,
:height 1209.6}),
:page 467},
:content {:text "seek"},
:properties {:color "yellow"}}
{:id #uuid "6437abff-a6b8-4d28-8a4e-8e67fe9cdd4d",
:page 467,
:position {:bounding {:x1 160.43750476837158,
:y1 870.1785888671875,
:x2 655.713963508606,
:y2 886.7500305175781,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 160.43750476837158,
:y1 870.1785888671875,
:x2 655.713963508606,
:y2 886.7500305175781,
:width 806.3999999999999,
:height 1209.6}),
:page 467},
:content {:text "first a seek, then waiting for the rotational delay, and finally the transfer."},
:properties {:color "yellow"}}
{:id #uuid "6437acc8-bc95-466b-9d04-acfe22b0eeee",
:page 467,
:position {:bounding {:x1 221.25001525878906,
:y1 988.5000305175781,
:x2 299.41912841796875,
:y2 1005.0714569091797,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 221.25001525878906,
:y1 988.5000305175781,
:x2 299.41912841796875,
:y2 1005.0714569091797,
:width 806.3999999999999,
:height 1209.6}),
:page 467},
:content {:text "track skew"},
:properties {:color "yellow"}}
{:id #uuid "6437ad1b-5292-4a42-80c4-8a1ff9f7f691",
:page 468,
:position {:bounding {:x1 459.5382385253906,
:y1 654.5982666015625,
:x2 547.2351989746094,
:y2 671.169677734375,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 459.5382385253906,
:y1 654.5982666015625,
:x2 547.2351989746094,
:y2 671.169677734375,
:width 806.3999999999999,
:height 1209.6}),
:page 468},
:content {:text "multi-zoned"},
:properties {:color "yellow"}}
{:id #uuid "6437ada7-4a51-4032-bdcc-110b47796be9",
:page 468,
:position {:bounding {:x1 572.2857360839844,
:y1 728.9910888671875,
:x2 611.1742858886719,
:y2 745.5625305175781,
:width 806.3999999999999,
:height 1209.6},
:rects ({:x1 572.2857360839844,
:y1 728.9910888671875,
:x2 611.1742858886719,
:y2 745.5625305175781,
:width 806.3999999999999,
:height 1209.6}),
:page 468},
:content {:text "cache"},
:properties {:color "yellow"}}],
:extra {:page 450}}
:extra {:page 471}}

View File

@ -9,7 +9,7 @@
;; Preferred workflow style.
;; Value is either ":now" for NOW/LATER style,
;; or ":todo" for TODO/DOING style.
:preferred-workflow :now
:preferred-workflow :todo
;; The app will ignore those directories or files.
;; E.g. :hidden ["/archived" "/test.md" "../assets/archived"]
@ -35,7 +35,7 @@
:ui/show-full-blocks? false
;; Expand block references automatically when zoom-in
:ui/auto-expand-block-refs? true
:ui/auto-expand-block-refs? false
;; Enable Block timestamp
:feature/enable-block-timestamps? false

View File

@ -0,0 +1,848 @@
file:: [ostep_1681115599584_0.pdf](../assets/ostep_1681115599584_0.pdf)
file-path:: ../assets/ostep_1681115599584_0.pdf
- # Part II
- thread
ls-type:: annotation
hl-page:: 311
hl-color:: yellow
id:: 6433ca28-1bdf-433d-8ed9-0d54bf5ba940
- share the same address space and thus can access the same data
- context switch: the address space remains the same
hl-page:: 311
ls-type:: annotation
id:: 6433cb70-d168-4863-8268-1e969df6ce06
hl-color:: yellow
- thread control blocks
ls-type:: annotation
hl-page:: 311
hl-color:: yellow
id:: 6433cb56-fbef-46da-83c2-13fa2dba2967
- thread-local storage: one stack per thread in the address space
hl-page:: 312
ls-type:: annotation
id:: 6433cba2-61bd-4549-a29f-2ad85b3e30cd
hl-color:: yellow
- Why thread?
- possible speedup through parallelization
- enable overlap of IO in a single program
- Though these could be done with multiple processes, threading makes sharing data easier
- KEY CONCURRENCY TERMS
ls-type:: annotation
hl-page:: 323
hl-color:: yellow
id:: 6433eabf-48d6-4776-b66f-a5f7804d1ddc
- **indeterminate**: the results depend on the timing of the code's execution.
- race condition
ls-type:: annotation
hl-page:: 320
hl-color:: yellow
id:: 6433e4cc-69e4-4057-8cc6-1766240d82f4
- A **critical section** is a piece of code that accesses a shared variable (or resource) and must not be concurrently executed by more than one thread.
hl-page:: 320
ls-type:: annotation
id:: 6433e52b-1f38-4f7c-b168-0aed624f9bdf
hl-color:: yellow
- **mutual exclusion**: This property guarantees that if one thread is executing within the *critical section*, the others will be prevented from doing so.
hl-page:: 320
ls-type:: annotation
id:: 6433e566-e6ef-45b3-84b1-eba981be914a
hl-color:: yellow
- Atomicity: *as a unit*, or, *all or none*
hl-page:: 321
ls-type:: annotation
id:: 6433e6a1-407c-4936-b184-dee868ef4107
hl-color:: yellow
- synchronization primitives
ls-type:: annotation
hl-page:: 322
hl-color:: yellow
id:: 6433e729-7043-453b-8d60-6e6c41560543
- sane: of sound mind; mentally healthy; sensible; rational
ls-type:: annotation
hl-page:: 322
hl-color:: green
id:: 6433e6e7-d995-4b69-96b3-261b79f94c1d
- Thread API
hl-page:: 327
ls-type:: annotation
id:: 6433f35b-403b-4b25-b9f9-076e9e34777e
hl-color:: yellow
- `pthread_create` `pthread_join` `pthread_mutex_lock` `pthread_cond_*`
- Locks
ls-type:: annotation
hl-page:: 339
hl-color:: yellow
id:: 6433f45b-0345-4790-8379-3d1a94e57ef5
- A lock is just a variable
hl-page:: 339
ls-type:: annotation
id:: 6433f4ba-f2e4-4743-a536-e2b7747433b7
hl-color:: yellow
- **lock variable**: some type of variable, which holds the *state* of the lock(and maybe additional data such as its holder or a queue for acquisition)
- **lock state**: available (or unlocked or free); acquired (or locked or held)
- **lock routines**:
- `lock()` tries to acquire the lock. If no other thread holds the lock, the thread will acquire the lock and enter the critical section(become the owner of the lock). Otherwise, it will not return while the lock is held by another thread.
- `unlock()` : The owner of the lock calls `unlock()`, then it is *available* again. If there are waiting threads, one of them will (eventually) notice (or be informed of) this change of the lock's state, acquire the lock, and enter the critical section.
- Locks help transform the chaos that is traditional OS scheduling into a more controlled activity
hl-page:: 340
ls-type:: annotation
id:: 6433f5e6-bc06-42a9-866e-e9a3053f528f
hl-color:: yellow
- Controlling Interrupts
ls-type:: annotation
hl-page:: 342
hl-color:: yellow
id:: 6433fbfd-a1bf-4fd9-a54d-e15189c77b15
- For *single-processor* systems, **disable interrupts** for critical sections.
- Problems
- disabling interrupts is a privileged operation; in the worst case, a greedy or buggy program never re-enables interrupts and the OS never regains control
- does NOT work on multi-processor systems, since each CPU has its own interrupt state
- important interrupts may get lost
- inefficient
- Just Using Loads/Stores (Fails)
hl-page:: 343
ls-type:: annotation
id:: 6433fe7e-2221-41ee-ad6b-7deaa4459aa5
hl-color:: yellow
- use a simple variable (flag) to indicate whether some thread has possession of a lock
hl-page:: 343
ls-type:: annotation
id:: 6433ff4a-856d-4e4b-af30-6cb600aefeb5
hl-color:: yellow
- On acquisition, load and test the flag. If it is free, set the flag; if not, spin-wait (a loop of load and test).
- On releasing, clear the flag.
- Problem
- When a thread is interrupted between testing the flag and setting it, *mutual exclusion* can be broken: both threads see the flag as free and both enter the critical section.
- Low efficiency because of spin-waiting.
- **spin lock**
- ((6436aafd-c85f-414c-8aee-acdc71e9138e))
- Requires a preemptive scheduler (or it may spin forever) and provides NO fairness guarantee
- For single-processor systems, performance is terrible: the thread holding the lock cannot make progress toward releasing it until it is scheduled again, so all other threads waiting for the lock can do nothing but spin when they are scheduled.
- For multi-processor systems, a spin lock may work well when thread B on CPU1 waits for thread A on CPU0 and the critical section is short: because the lock owner keeps making progress, spinning doesn't waste many cycles.
- **Priority Inversion**: Threads with high priority wait for locks held by threads with low priority.
hl-page:: 355
ls-type:: annotation
id:: 6435099b-0834-483e-9ef2-98a0b795cf00
hl-color:: yellow
Solution: **priority inheritance**, or simply give all threads the same priority.
- **Test-And-Set (Atomic Exchange)**
hl-page:: 344
ls-type:: annotation
id:: 643401e0-fcec-41d3-9898-d5c4175ac464
hl-color:: yellow
- Returns the old value pointed to by the `old_ptr`, and simultaneously updates said value to `new`.
- "test" the old value (which is what is returned) while simultaneously "set" the memory location to a new value
- ((6436af87-3f1b-4ee8-a2c8-4de0f1961f1a))
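- A minimal sketch of a test-and-set spin lock in the book's style; the C version of `TestAndSet` only shows the semantics, since real hardware executes it as a single atomic instruction:
```C
// Semantics of test-and-set; the hardware performs this atomically.
int TestAndSet(int *old_ptr, int new) {
    int old = *old_ptr;   // fetch the old value at old_ptr
    *old_ptr = new;       // store 'new' into old_ptr
    return old;           // return the old value
}

typedef struct { int status; } lock_t;   // 0: free, 1: held

void lock_init(lock_t *lock) { lock->status = 0; }

void lock(lock_t *lock) {
    while (TestAndSet(&lock->status, 1) == 1)
        ;  // spin: the old value was 1, so another thread holds the lock
}

void unlock(lock_t *lock) { lock->status = 0; }
```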
- **Compare-And-Swap**
hl-page:: 348
ls-type:: annotation
id:: 6434f8ac-d762-40a4-abb0-2955c2c8b396
hl-color:: yellow
- Test whether the value at the address specified by `ptr` is equal to `expected`.
hl-page:: 348
ls-type:: annotation
id:: 6434fab0-08de-4f28-8d8e-f48f7e04aaaa
hl-color:: yellow
If so, update the memory location with the `new` value.
If not, do nothing.
Return the old value at the memory location.
- ```c
int CompareAndSwap(int *ptr, int expected, int new) {
    int original = *ptr;          // read the old value
    if (original == expected)
        *ptr = new;               // swap only if it matches the expected value
    return original;              // return the old value either way
}
```
- Compare-and-swap flavor spin lock
```C
void lock(lock_t *lock) {
while (CompareAndSwap(&lock->status, 0, 1) == 1) ;
}
```
- **load-linked** and **store-conditional**
hl-page:: 349
ls-type:: annotation
id:: 6434fde1-9d19-4381-805e-f2a972875dc2
hl-color:: yellow
- The **load-linked** operates much like a typical load instruction, and simply fetches a value from memory and places it in a register.
ls-type:: annotation
hl-page:: 349
hl-color:: yellow
id:: 6434fe1c-47f3-422c-a317-be72f08d6aef
- **store-conditional** only succeeds if no intervening store to the address has taken place.
hl-page:: 349
ls-type:: annotation
id:: 6434fe62-0e92-4414-86cc-b0c37fcf51ec
hl-color:: yellow
On success, return 1 and update the value at `ptr` to value.
On failure, return 0 and the value at `ptr` is not updated.
- ```c
int LL(int *ptr) { return *ptr; }
int SC(int *ptr, int value) {
if (/*no update to *ptr since LoadLinked to this address*/) {
*ptr = value;
return 1; // success!
} else {
return 0; // failed to update
}
}
```
- LL/SC flavor spin lock: very similar to the errant Load/Store lock, but the special instructions here can detect intervening stores between the load-linked and the store-conditional
```c
void lock(lock_t *lock) {
while (1) {
while (LL(&lock->status) == 1) ; // test
if (SC(&lock->status, 1) == 1) // set
break;
// else retry, in case lock->status is changed
}
}
```
- **Fetch-And-Add**
ls-type:: annotation
hl-page:: 350
hl-color:: yellow
id:: 64350170-c853-4080-9ed1-2777ea3a18c8
- Atomically increments a value while returning the old value at a particular address
- ```c
int FetchAndAdd(int *ptr) {
int old = *ptr;
*ptr = old + 1;
return old;
}
```
- **ticket lock**
hl-page:: 351
ls-type:: annotation
id:: 64350331-8fbb-4c41-9ac1-1a4ba852f772
hl-color:: yellow
- ((6436af5c-0000-4bfb-9a27-1d7cf0a830db))
- Ensure progress for all threads. Once a thread is assigned its ticket value, it will be scheduled at some point in the future (i.e. it will definitely get its turn as `unlock()` operations increase the global `turn` value).
hl-page:: 351
ls-type:: annotation
id:: 64350420-ca8a-4cac-af2f-f4e7deb5d1be
hl-color:: yellow
In contrast, a test-and-set spin lock may starve a thread that is very unlucky and never wins the contention (a ticket-lock sketch follows below).
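- A ticket-lock sketch built on the `FetchAndAdd` shown above; atomicity of `FetchAndAdd` is assumed to come from hardware:
```C
typedef struct {
    int ticket;   // next ticket number to hand out
    int turn;     // ticket number currently allowed to hold the lock
} lock_t;

void lock_init(lock_t *lock) { lock->ticket = 0; lock->turn = 0; }

void lock(lock_t *lock) {
    int myturn = FetchAndAdd(&lock->ticket);  // atomically take a ticket
    while (lock->turn != myturn)
        ;  // spin until it is my turn
}

void unlock(lock_t *lock) {
    lock->turn = lock->turn + 1;  // pass the lock to the next ticket holder
}
```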
- Simple Yield Lock
hl-page:: 353
ls-type:: annotation
id:: 64350781-6995-41db-8b8e-2de0eb84136a
hl-color:: yellow
- `yield`: a system call that moves the caller from the running state to the ready state, and thus promotes another thread to running.
hl-page:: 353
ls-type:: annotation
id:: 643507af-1153-46c1-b232-31a9a203e5df
hl-color:: yellow
- ```C
void lock(lock_t *lock) {
while (TestAndSet(&lock->status, 1) == 1)
yield();
}
```
- Problems: starvation is still possible; context-switch overhead remains, though it is better than spinning
- Lock With Queues, Test-and-set, Yield, And Wakeup
ls-type:: annotation
hl-page:: 354
hl-color:: yellow
id:: 64350b44-dfae-4544-93f9-ff2b343fefd4
- The real problem is that we have little control over which thread runs next, which causes potential waste.
hl-page:: 353
ls-type:: annotation
id:: 64350b4e-9559-49d9-aa37-eda9fe425b7f
hl-color:: yellow
- `park()`: put a calling thread to sleep
hl-page:: 354
ls-type:: annotation
id:: 64350bfb-64f7-4d41-8cc2-260dbec3372d
hl-color:: yellow
- `unpark(threadID)`: wake a particular thread
hl-page:: 354
ls-type:: annotation
id:: 64350c01-39bb-4d15-b554-0287b13806ee
hl-color:: yellow
- ((6436b05f-2873-4af4-952c-86d82685b583))
- When a thread is woken up, it behaves as if it is returning from `park()`. Thus, when we `unpark` a thread, the lock is passed directly from the releasing thread to the next acquiring thread; the flag is not set to 0 in between.
- wakeup/waiting race: If the thread is scheduled out just before it calls `park`, and then the lock owner calls `unpark` on that thread, it would sleep forever.
hl-page:: 356
ls-type:: annotation
id:: 64351ba3-d4b5-4999-bc61-7733d5e0a061
hl-color:: yellow
- One solution is to use `setpark()`: indicate the thread is about to `park`. If it happens to be interrupted and another thread calls `unpark` before `park` is actually called, the subsequent park returns immediately instead of sleeping.
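- A sketch of the queue-based lock described above, roughly following the book's Solaris-style figure; `queue_t` and its helpers, `gettid()`, `park()`, `unpark()`, and `setpark()` are assumed primitives:
```C
typedef struct {
    int flag;      // 1 if the lock is held
    int guard;     // spin lock protecting flag and the queue
    queue_t *q;    // queue of waiting thread IDs (assumed helper type)
} lock_t;

void lock_init(lock_t *m) {
    m->flag = 0;
    m->guard = 0;
    queue_init(m->q);
}

void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;                            // spin briefly to acquire the guard
    if (m->flag == 0) {
        m->flag = 1;                 // lock acquired
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());   // enqueue ourselves
        setpark();                   // announce intent to sleep (avoids the wakeup/waiting race)
        m->guard = 0;
        park();                      // sleep until unparked
    }
}

void unlock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;
    if (queue_empty(m->q))
        m->flag = 0;                        // nobody is waiting: the lock becomes free
    else
        unpark(queue_remove(m->q));         // hand the lock directly to the next thread
    m->guard = 0;
}
```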
- Peterson's algorithm: a mutual-exclusion lock for two threads that needs no hardware atomic instruction. It uses two intention flags and a turn variable (see the sketch below).
hl-page:: 345
ls-type:: annotation
id:: 6434edd3-2a7b-4e11-af18-29854e628bc6
hl-color:: yellow
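- A sketch of Peterson's algorithm for two threads (indices 0 and 1); on modern hardware with relaxed memory ordering it additionally needs memory barriers to be correct:
```C
int flag[2];   // flag[i] == 1: thread i wants to enter the critical section
int turn;      // which thread should yield when both want in

void init(void) {
    flag[0] = flag[1] = 0;
    turn = 0;
}

void lock(int self) {          // self is 0 or 1
    flag[self] = 1;            // announce interest
    turn = 1 - self;           // politely let the other thread go first
    while (flag[1 - self] == 1 && turn == 1 - self)
        ;  // spin while the other thread is interested and it is its turn
}

void unlock(int self) {
    flag[self] = 0;            // no longer interested
}
```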
- **two-phase lock**
hl-page:: 358
ls-type:: annotation
id:: 643522a7-4b16-4998-9b2f-47a852681a16
hl-color:: yellow
- A combination of spin lock and sleep lock
- In the first phase, the lock spins for a while, hoping that it can acquire the lock.
hl-page:: 358
ls-type:: annotation
id:: 6435230e-d84a-4c91-8329-b7608b0d543a
hl-color:: yellow
- A second phase is entered if the lock is not acquired, where the caller is put to sleep, and only woken up when the lock becomes free later.
ls-type:: annotation
hl-page:: 358
hl-color:: yellow
id:: 64352344-d140-468c-987c-e8afa05c2171
- Linux System Call **futex**
hl-page:: 356
ls-type:: annotation
id:: 64351e9a-6505-4176-a6fb-ddf63f3245a8
hl-color:: yellow
- each `futex` is associated with ==a specific physical memory location==, and ==an in-kernel queue==
- `futex_wake(address)` wakes one thread that is waiting on the queue.
- `futex_wait(address, expected)` puts the calling thread to sleep, assuming the value at `address` is equal to `expected`. If it is not equal, the call returns immediately.
- Figure 28.10: Linux-based Futex Locks
ls-type:: annotation
hl-page:: 357
hl-color:: yellow
id:: 64352221-d590-4371-a5f0-29e9cfa75ccb
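- A much-simplified sleep-lock sketch using the `futex_wait`/`futex_wake` semantics described above; it is not the book's two-phase Figure 28.10, and `futex_wait`/`futex_wake` are assumed thin wrappers around the Linux futex syscall:
```C
typedef struct { int state; } lock_t;   // 0: free, 1: held

void lock(lock_t *m) {
    // __sync_lock_test_and_set is a GCC builtin atomic exchange; it returns
    // the previous value of m->state.
    while (__sync_lock_test_and_set(&m->state, 1) == 1)
        futex_wait(&m->state, 1);       // sleep only if the lock still looks held
}

void unlock(lock_t *m) {
    __sync_lock_release(&m->state);     // set state back to 0
    futex_wake(&m->state);              // wake one waiter, if any
}
```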
- efficacy: effectiveness; the power to produce an effect
ls-type:: annotation
hl-page:: 341
hl-color:: green
id:: 6433fb69-1425-46b4-996f-f91da5d3e8d0
- foil
ls-type:: annotation
hl-page:: 347
hl-color:: green
id:: 6434f523-44b7-40ab-8fea-528969c5acfd
- delve: to dig into; to research or explore deeply
ls-type:: annotation
hl-page:: 349
hl-color:: green
id:: 6434fb8c-2b3b-4d80-83fb-3b34da4dcd28
- brag: to boast; to talk oneself up
ls-type:: annotation
hl-page:: 351
hl-color:: green
id:: 643501c1-f11b-4e85-8125-d2a5a31f69b0
- scourge: to whip; to torment; a cause of great suffering
- Lock-based Concurrent Data Structures
ls-type:: annotation
hl-page:: 361
hl-color:: yellow
id:: 643525b0-e245-489b-877d-a2a1d63e7ea6
- **Concurrent Counters**
hl-page:: 361
ls-type:: annotation
id:: 643525e5-fb85-48d4-905a-2a88b9ac0b0d
hl-color:: yellow
collapsed:: true
- **Counter with lock**
- Wrap all the operations with a single lock.
- Performance is bad due to lock contention and it gets worse when the number of threads increases.
- **perfect scaling**: increasing the number of threads does not increase the time taken to complete each operation
hl-page:: 363
ls-type:: annotation
id:: 64352751-d9bd-4d5e-a8ba-cd18f86b1a15
hl-color:: yellow
- **approximate counter**
hl-page:: 363
ls-type:: annotation
id:: 64352794-d7c8-42f9-8321-f874967cebf2
hl-color:: yellow
- represent a single logical counter via ==numerous local physical counters== (one per CPU core), as well as ==a single global counter==. Each counter has its own ==lock==.
- To increment the counter, acquire the ==local lock== and increment the local counter, thus avoiding contention.
- To read the counter, acquire the ==global lock== and read the global value (an approximation).
- To keep the global counter up to date, the local values are periodically transferred to the global counter and reset, which requires the ==global lock and the local lock==. A threshold `S` determines how often this transfer happens, tuning the trade-off between scalability and precision (a sketch follows below).
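- A sketch of an approximate counter along the lines of the book's figure; `NUMCPUS` and the threshold value are arbitrary choices here:
```C
#include <pthread.h>

#define NUMCPUS 4

typedef struct {
    int global;                      // global count
    pthread_mutex_t glock;           // global lock
    int local[NUMCPUS];              // per-CPU counts
    pthread_mutex_t llock[NUMCPUS];  // per-CPU locks
    int threshold;                   // transfer frequency S
} counter_t;

void counter_init(counter_t *c, int threshold) {
    c->threshold = threshold;
    c->global = 0;
    pthread_mutex_init(&c->glock, NULL);
    for (int i = 0; i < NUMCPUS; i++) {
        c->local[i] = 0;
        pthread_mutex_init(&c->llock[i], NULL);
    }
}

// update: usually only the local lock is taken; the local count is moved
// into the global counter once it reaches the threshold
void update(counter_t *c, int threadID, int amt) {
    int cpu = threadID % NUMCPUS;
    pthread_mutex_lock(&c->llock[cpu]);
    c->local[cpu] += amt;
    if (c->local[cpu] >= c->threshold) {
        pthread_mutex_lock(&c->glock);
        c->global += c->local[cpu];
        pthread_mutex_unlock(&c->glock);
        c->local[cpu] = 0;
    }
    pthread_mutex_unlock(&c->llock[cpu]);
}

// get: only approximate, since it misses counts still held in local counters
int get(counter_t *c) {
    pthread_mutex_lock(&c->glock);
    int val = c->global;
    pthread_mutex_unlock(&c->glock);
    return val;
}
```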
- **Concurrent Linked Lists**
ls-type:: annotation
hl-page:: 367
hl-color:: yellow
id:: 643530d8-9d09-4c8a-9e92-47dfe814ef50
collapsed:: true
- Again, the simplest way to implement this is to wrap all operations on the list with a single lock.
- Assuming `malloc` is ==thread-safe==, we can improve the code a little by narrowing the critical section: only operations on the shared list structure need to be locked.
- **hand-over-hand locking**: a lock per node.
hl-page:: 369
ls-type:: annotation
id:: 64353237-4b74-4148-b7c1-5854d83a18c7
hl-color:: yellow
- When traversing the list, the code first grabs the next node's lock and then releases the current node's lock.
- In practice, it ==doesn't work well==: the overhead of acquiring and releasing a lock per node is prohibitive
- **Concurrent Queues**
ls-type:: annotation
hl-page:: 370
hl-color:: yellow
id:: 64353353-9de2-421b-967d-dc80a597eecd
- Two locks, one for the head and one for the tail, so `enqueue` and `dequeue` operations can proceed concurrently.
- Add a dummy node to separate head and tail operations. Without it, `dequeue` would need to acquire both locks to handle the case where the queue is empty (a sketch follows below).
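- A sketch of the two-lock queue in the style the book presents (after Michael and Scott); `malloc` error handling is omitted:
```C
#include <stdlib.h>
#include <pthread.h>

typedef struct __node_t {
    int value;
    struct __node_t *next;
} node_t;

typedef struct {
    node_t *head;                           // dequeue side
    node_t *tail;                           // enqueue side
    pthread_mutex_t head_lock, tail_lock;
} queue_t;

void queue_init(queue_t *q) {
    node_t *dummy = malloc(sizeof(node_t)); // dummy node separates head from tail
    dummy->next = NULL;
    q->head = q->tail = dummy;
    pthread_mutex_init(&q->head_lock, NULL);
    pthread_mutex_init(&q->tail_lock, NULL);
}

void queue_enqueue(queue_t *q, int value) {
    node_t *n = malloc(sizeof(node_t));
    n->value = value;
    n->next = NULL;
    pthread_mutex_lock(&q->tail_lock);      // only the tail lock is needed
    q->tail->next = n;
    q->tail = n;
    pthread_mutex_unlock(&q->tail_lock);
}

int queue_dequeue(queue_t *q, int *value) {
    pthread_mutex_lock(&q->head_lock);      // only the head lock is needed
    node_t *old_dummy = q->head;
    node_t *new_head = old_dummy->next;
    if (new_head == NULL) {                 // queue is empty
        pthread_mutex_unlock(&q->head_lock);
        return -1;
    }
    *value = new_head->value;
    q->head = new_head;                     // new_head becomes the new dummy
    pthread_mutex_unlock(&q->head_lock);
    free(old_dummy);
    return 0;
}
```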
- **Concurrent Hash Table**
hl-page:: 372
ls-type:: annotation
id:: 6435360d-c176-494a-9d61-b1fd0107a9bd
hl-color:: yellow
- instead of having a single lock for the entire structure, it uses a lock per hash bucket
ls-type:: annotation
hl-page:: 372
hl-color:: yellow
id:: 6435363d-c697-42a6-bfd0-8a2332cef394
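- A minimal sketch, assuming a concurrent `list_t` (with its own internal lock) like the one discussed above; `list_init`, `list_insert`, and `list_lookup` are that assumed list's API:
```C
#define BUCKETS 101

typedef struct {
    list_t lists[BUCKETS];   // each bucket is a concurrent list with its own lock
} hash_t;

void hash_init(hash_t *h) {
    for (int i = 0; i < BUCKETS; i++)
        list_init(&h->lists[i]);
}

int hash_insert(hash_t *h, int key) {
    return list_insert(&h->lists[key % BUCKETS], key);  // only that bucket's lock is taken
}

int hash_lookup(hash_t *h, int key) {
    return list_lookup(&h->lists[key % BUCKETS], key);
}
```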
- ubiquitous: seemingly present everywhere; extremely common
ls-type:: annotation
hl-page:: 372
hl-color:: green
id:: 6435365a-b5d6-46fc-a9a1-25b0d23aa529
- humble: modest; deferential; open-minded; to belittle
ls-type:: annotation
hl-page:: 373
hl-color:: green
id:: 6435367f-dd9e-449d-b0e4-3d8c9e14f6c2
- sloppy: careless, slapdash; (of clothes) loose and baggy; too runny, not thick enough
hl-page:: 376
ls-type:: annotation
id:: 643536c8-fc05-4bbe-8d1d-0f4f6d1c4fee
hl-color:: green
- gross: total, before deductions; flagrant, extreme; crude; bloated; rough, approximate
hl-page:: 378
ls-type:: annotation
id:: 643537d3-7d01-442b-b47e-59433c2aa6db
hl-color:: green
- **condition variable**
hl-page:: 378
ls-type:: annotation
id:: 643537ff-1028-4725-8d7a-c0338cc946d3
hl-color:: yellow
- A ==condition variable== is an explicit queue that threads can put themselves on when some state of execution(condition) is not as desired (by *waiting on the condition*); some other thread, when it changes said state, can then wake one (or more) of those waiting threads and thus allow them to continue (by *signaling*).
hl-page:: 378
ls-type:: annotation
id:: 64353882-7697-4c16-8e53-c8f59ea256c1
hl-color:: yellow
- Operations
- `wait()` puts the caller to sleep. `pthread_cond_wait(pthread_cond_t *c, pthread_mutex_t *m)`
hl-page:: 378
ls-type:: annotation
id:: 643538d5-9ea3-4399-9fa2-d75fdf0e1dd4
hl-color:: yellow
- `signal()` wakes up a sleeping thread waiting on this condition. `pthread_cond_signal(pthread_cond_t *c);`
hl-page:: 379
ls-type:: annotation
id:: 643538de-cc40-4dd2-8f03-9492004f209b
hl-color:: yellow
- The `wait()` call also takes a mutex as a parameter; it assumes that this mutex is locked when `wait()` is called. The responsibility of `wait()` is to ==release the lock and put the calling thread to sleep== (atomically); when the thread wakes up, it must ==re-acquire the lock before returning== to the caller. This design helps avoid race conditions that arise when a thread is trying to put itself to sleep.
- use a while loop instead of just an if statement when deciding whether to wait on the condition.
ls-type:: annotation
hl-page:: 380
hl-color:: yellow
id:: 643547c5-1613-49e9-899e-0e86f59a1462
- stem: the stalk of a plant; the stem of a flower or leaf; to stop, block, or hold back
hl-page:: 379
ls-type:: annotation
id:: 64353eb8-8ed8-4680-a3c0-91608b429408
hl-color:: green
- **stem from sth**: to be the result of; to originate from; to have its roots in
- **producer/consumer problem**
hl-page:: 382
ls-type:: annotation
id:: 64354974-adea-4b20-90f4-a12ebe1e4d5b
hl-color:: yellow
- **Mesa semantics**: Signaling a thread only wakes it up; it is thus a hint that the state of the world has ==changed==, but there is ==no guarantee== that when the woken thread runs, the state will ==still be as desired==. (Another thread may run before it and change the state again.)
hl-page:: 385
ls-type:: annotation
id:: 64354cc4-14c5-408d-b879-7d4d011b2b5c
hl-color:: yellow
- So, always use while loops. A while loop makes sure the thread only proceeds when the world is in the desired state, which tackles the ((64355502-f41f-40dd-b71f-e0abdbc76716)) and provides support for ((64355441-5a1b-4015-baa1-65917526079c))
hl-page:: 386
ls-type:: annotation
id:: 64354db0-8c74-4c14-b063-d26378a10555
hl-color:: yellow
- **Hoare semantics**: provides a stronger guarantee that the woken thread will run immediately upon being woken
hl-page:: 386
ls-type:: annotation
id:: 64354d46-4286-44fd-9e82-2ba562a50f25
hl-color:: yellow
- Incorrect solution: a single condition variable. The problem arises from the ==undirected wakeup operation==: there is no telling which thread will be woken up.
- Envision multiple consumers and one producer:
1. producer `P1` increases count to 1, signals the CV and sleeps
2. consumer `C1` is woken, reduces count to 0, signals the CV and sleeps
3. another consumer `C2` is woken up ==by accident==, finds out count is 0, sleeps
4. In this case, they all sleep and thus nobody will signal any of them
- If in step 3 the producer `P1` were woken up instead, everything would be fine. Obviously, one solution is to ==exert control over which thread is woken up==. Waking up all threads can also solve this problem, see ((64355441-5a1b-4015-baa1-65917526079c)).
- Correct solution: two condition variables (a sketch follows below).
- Producer threads wait on the condition `empty`, and signals `fill`. Conversely, consumer threads wait on `fill` and signal `empty`.
- ((6436b07d-9279-46bb-9c6b-985eb2324df8))
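- A sketch of the two-condition-variable solution with while loops (bounded buffer of size `MAX`; the loop counts are arbitrary):
```C
#include <pthread.h>

#define MAX 10
int buffer[MAX];
int fill_ptr = 0, use_ptr = 0, count = 0;

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t empty = PTHREAD_COND_INITIALIZER;  // signaled by consumers: a slot became free
pthread_cond_t fill  = PTHREAD_COND_INITIALIZER;  // signaled by producers: a slot was filled

void put(int value) {
    buffer[fill_ptr] = value;
    fill_ptr = (fill_ptr + 1) % MAX;
    count++;
}

int get(void) {
    int tmp = buffer[use_ptr];
    use_ptr = (use_ptr + 1) % MAX;
    count--;
    return tmp;
}

void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        pthread_mutex_lock(&mutex);
        while (count == MAX)                   // while, not if (Mesa semantics)
            pthread_cond_wait(&empty, &mutex);
        put(i);
        pthread_cond_signal(&fill);
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        pthread_mutex_lock(&mutex);
        while (count == 0)
            pthread_cond_wait(&fill, &mutex);
        int tmp = get();
        pthread_cond_signal(&empty);
        pthread_mutex_unlock(&mutex);
        (void)tmp;                             // a real consumer would use the value
    }
    return NULL;
}
```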
- **spurious wakeups**
hl-page:: 390
ls-type:: annotation
id:: 64355502-f41f-40dd-b71f-e0abdbc76716
hl-color:: yellow
- In some thread packages, due to details of the implementation, it is possible that two threads get woken up though just a single signal has taken place.
- **covering condition**
hl-page:: 391
ls-type:: annotation
id:: 64355441-5a1b-4015-baa1-65917526079c
hl-color:: yellow
- covers all the cases where a thread needs to wake up; the threads that did not need to wake up simply re-check the condition and go back to sleep
- `pthread_cond_broadcast()` wakes up all waiting threads
- albeit: although; even though
ls-type:: annotation
hl-page:: 390
hl-color:: green
id:: 64354f54-b26c-48dc-a328-4ae355b680f3
- spurious: false; fake; based on mistaken ideas or reasoning; fallacious
hl-page:: 390
ls-type:: annotation
id:: 643554f4-75a7-48fa-9366-87058ee723fb
hl-color:: green
- Semaphores
ls-type:: annotation
hl-page:: 396
hl-color:: yellow
id:: 64356d96-cce8-48ad-80f1-e3e02a1a4684
- A semaphore is an ==object with an integer value== that we can manipulate with two routines `sem_wait()` and `sem_post()`. The initial value determines its behavior, so we need to give it an initial value through `sem_init()`
hl-page:: 396
ls-type:: annotation
id:: 64356dba-48b4-49b8-8182-c962f12f03a5
hl-color:: yellow
- Semaphore: Definitions Of **Wait And Post**
ls-type:: annotation
hl-page:: 397
hl-color:: yellow
id:: 6435744b-a300-40ad-ba91-157666d8cd2a
- `sem_wait(sem_t *s)`: First decrement the value of the semaphore by one, then wait if the value of the semaphore is negative
- `sem_post(sem_t *s)`: First increment the value of the semaphore by one. If there is any thread waiting, wake up one of them
- The value of the semaphore, *when negative*, is equal to the ==number of waiting threads==
hl-page:: 397
ls-type:: annotation
id:: 64357512-e25b-4226-961a-caec367fc8a3
hl-color:: yellow
- **Binary Semaphores (Locks)**
ls-type:: annotation
hl-page:: 398
hl-color:: yellow
id:: 6435753a-65b5-4e46-82bc-54c11c1cd533
- Initialize semaphore to 1, indicating we only have one piece of resource (the critical section).
- Wrap the critical section with `sem_wait` and `sem_post`
- When the lock is held, the semaphore's value is 0. Another acquisition request drives the value to -1, which puts the caller to sleep. When the lock is free (value 1), acquisition decrements the value to 0 and the caller does not block.
- **Semaphores For Ordering (Condition Variable, or Ordering Primitive)**
hl-page:: 399
ls-type:: annotation
id:: 64357930-2d96-4867-bc3d-2fe89990ce5f
hl-color:: yellow
- Initialize the semaphore to 0
- Consider the *join* operation. The parent calls `sem_wait` and the child calls `sem_post`. No matter which thread is scheduled first, the semaphore guarantees that the parent only proceeds after the child has finished (a sketch follows below).
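- A sketch of the join-style ordering; the semaphore starts at 0, so the parent blocks in `sem_wait` until the child's `sem_post`, no matter who runs first:
```C
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;

void *child(void *arg) {
    (void)arg;
    printf("child\n");
    sem_post(&s);            // signal: the child is done
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 0);      // the initial value 0 is what makes the ordering work
    printf("parent: begin\n");
    pthread_t c;
    pthread_create(&c, NULL, child, NULL);
    sem_wait(&s);            // wait until the child has run
    printf("parent: end\n");
    return 0;
}
```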
- **The Producer/Consumer (Bounded Buffer) Problem (Again)**
hl-page:: 401
ls-type:: annotation
id:: 64357c6d-381e-492e-b901-095454f5315e
hl-color:: yellow
- Two semaphores, `empty` and `full`, coordinate the consumer and producer, plus one semaphore used as a lock
- Initialize `empty <- MAX`, and `full <- 0`
- The consumer waits on `full` and posts `empty`; conversely, the producer waits on `empty` and posts `full`
- Special case for `MAX=1`
- When only one slot is available in the buffer, we don't even need the lock: the binary semaphore not only controls buffer entry but also works as a lock.
- Otherwise, there will be a ==data race== inside the `put/get` operations due to potential multi-thread access to these procedures (when `MAX > 1`, `sem_wait(&empty)` may let in more than one thread).
- Deadlock avoidance
- If the lock semaphore is the outermost one, deadlock can occur (a thread may sleep in `sem_wait(&empty)` while still holding `mutex`). Therefore, put the lock acquisition inside the `empty`/`full` semaphore pair.
- Implementation
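- A minimal sketch for the general `MAX > 1` case; the required initialization is shown in a comment and the loop counts are arbitrary:
```C
#include <pthread.h>
#include <semaphore.h>

#define MAX 10
int buffer[MAX];
int fill_idx = 0, use_idx = 0;

sem_t empty, full, mutex;
// In main: sem_init(&empty, 0, MAX); sem_init(&full, 0, 0); sem_init(&mutex, 0, 1);

void put(int value) {
    buffer[fill_idx] = value;
    fill_idx = (fill_idx + 1) % MAX;
}

int get(void) {
    int tmp = buffer[use_idx];
    use_idx = (use_idx + 1) % MAX;
    return tmp;
}

void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        sem_wait(&empty);      // wait for an empty slot
        sem_wait(&mutex);      // take the lock INSIDE empty/full to avoid deadlock
        put(i);
        sem_post(&mutex);
        sem_post(&full);       // announce a filled slot
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);
        sem_wait(&mutex);
        int tmp = get();
        sem_post(&mutex);
        sem_post(&empty);
        (void)tmp;             // a real consumer would use the value
    }
    return NULL;
}
```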
- **Reader-Writer Locks**
ls-type:: annotation
hl-page:: 406
hl-color:: yellow
id:: 643583b4-26b1-4cbf-801c-11ed6e63976e
- Either allow ==multiple readers to read== concurrently, or allow ==only one writer to write==.
- Two sets of operation
- `rwlock_acquire/release_writelock()`: simply `wait/post` the `writelock`
- `rwlock_acquire/release_readlock()`: acquire `writelock` when the ==first reader acquires==, and release it when the ==last reader releases==
- Implementation
```C
#include <semaphore.h>

typedef struct _rwlock_t {
    sem_t guard;      // binary semaphore protecting 'readers'
    sem_t writelock;  // allows ONE writer or MANY readers
    int readers;      // number of readers in the critical section
} rwlock_t;

void rwlock_init(rwlock_t *rw) {
    rw->readers = 0;
    sem_init(&rw->guard, 0, 1);
    sem_init(&rw->writelock, 0, 1);
}

void rwlock_acquire_readlock(rwlock_t *rw) {
    sem_wait(&rw->guard);
    if (++rw->readers == 1) sem_wait(&rw->writelock);  // first reader also takes writelock
    sem_post(&rw->guard);
}

void rwlock_release_readlock(rwlock_t *rw) {
    sem_wait(&rw->guard);
    if (--rw->readers == 0) sem_post(&rw->writelock);  // last reader releases writelock
    sem_post(&rw->guard);
}

void rwlock_acquire_writelock(rwlock_t *rw) { sem_wait(&rw->writelock); }
void rwlock_release_writelock(rwlock_t *rw) { sem_post(&rw->writelock); }
```
- Problems: more overhead; unfairness (writers are much more likely to starve).
- To tackle writer starvation, we can manually wake up suspended writers (if any) every time the read lock is released. [Wiki](https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock)
- **The Dining Philosophers**
hl-page:: 408
ls-type:: annotation
id:: 643587a7-ade4-4f09-be50-aea233ff02c0
hl-color:: yellow
- Background setting
hl-page:: 408
ls-type:: annotation
id:: 6435889f-1375-4b94-8630-b3d0d7bdfa56
hl-color:: yellow
- 5 "philosophers" around a table.
Between each pair of philosophers is a single fork (and thus, 5 total).
The philosophers each have times where they think (don't need forks), and times where they eat.
In order to eat, a philosopher needs two forks (left and right).
The contention for these forks is our synchronization problem.
- Solution
- A semaphore per fork, and helper function `left/right(p)` which is the fork on philosopher `p`'s left/right.
- Deadlock: if each philosopher tries to grab the fork on their left first, there will be a deadlock. When all of them hold their left-side forks, every fork is taken and no one can get their right-side fork.
- Breaking the deadlock: force one philosopher to try to grab the right-side fork first, which breaks the circular wait
- Implementation
```C
#include <semaphore.h>

sem_t forks[5];                 // one semaphore per fork, each initialized to 1

int left(int p)  { return p; }            // fork on philosopher p's left
int right(int p) { return (p + 1) % 5; }  // fork on philosopher p's right

void get_forks(int p) {
    if (p == 4) {               // the last philosopher grabs the right fork first
        sem_wait(&forks[right(p)]);
        sem_wait(&forks[left(p)]);
    } else {
        sem_wait(&forks[left(p)]);
        sem_wait(&forks[right(p)]);
    }
}

void put_forks(int p) {
    sem_post(&forks[left(p)]);
    sem_post(&forks[right(p)]);
}

void philosopher(int p) {
    while (1) {
        think();                // assumed to be defined elsewhere
        get_forks(p);
        eat();                  // assumed to be defined elsewhere
        put_forks(p);
    }
}
```
- Implement Semaphores
ls-type:: annotation
hl-page:: 411
hl-color:: yellow
id:: 643589a6-31e6-4603-9259-999e9c8860f7
- Implementing Zemaphores With One Lock And One CV: the book authors provide a simple implementation of a semaphore.
hl-page:: 412
ls-type:: annotation
id:: 64358de1-f418-44fd-8a77-bc0faa368059
hl-color:: yellow
- Implementation
```C
// cond_t / mutex_t and their helpers are the book's thin wrappers around
// the corresponding pthread types and calls.
typedef struct {
    int value;       // the semaphore's current value
    cond_t cond;
    mutex_t lock;
} sem_t;

void sem_init(sem_t *sem, int value) {
    sem->value = value;
    cond_init(&sem->cond);
    mutex_init(&sem->lock);
}

void sem_wait(sem_t *sem) {
    mutex_lock(&sem->lock);
    while (sem->value <= 0)                 // sleep until the value is positive
        cond_wait(&sem->cond, &sem->lock);
    sem->value--;
    mutex_unlock(&sem->lock);
}

void sem_post(sem_t *sem) {
    mutex_lock(&sem->lock);
    sem->value++;
    cond_signal(&sem->cond);                // wake one waiter, if any
    mutex_unlock(&sem->lock);
}
```
- salient: most important; notable; prominent
ls-type:: annotation
hl-page:: 397
hl-color:: green
id:: 64357404-d348-42b3-96a3-ba28575baa66
- ensue: to happen afterwards; to follow as a result
ls-type:: annotation
hl-page:: 408
hl-color:: green
id:: 64358802-3b22-46ed-a0e2-71cc9df69a7b
- Throttle: a valve that controls flow; the throat; to choke; to restrict the flow of
hl-page:: 411
ls-type:: annotation
id:: 64358758-cb9c-4e8d-aaa4-f8e50457db88
hl-color:: green
- bog: a swamp; a mire; to get (something) stuck in mud; to bring to a standstill
hl-page:: 411
ls-type:: annotation
id:: 64358755-1fae-4ea2-93a3-8c9d3d3e11c3
hl-color:: green
- ramification: a (complex and hard-to-foresee) consequence or result
hl-page:: 410
ls-type:: annotation
id:: 64358b0c-e441-4d0a-852d-ecfde369306c
hl-color:: green
- Non-Deadlock Bugs: A large fraction (97%) of non-deadlock bugs studied by Lu et al. are either ==atomicity violations== or ==order violations==.
hl-page:: 420
ls-type:: annotation
id:: 64361e4c-62eb-4599-9809-0f77f9ce1cd0
hl-color:: yellow
- **Deadlock**
ls-type:: annotation
hl-page:: 420
hl-color:: yellow
id:: 64361fb7-5aa6-45cd-8b1e-aa0d0c300ad2
- **Conditions for Deadlock**
hl-page:: 422
ls-type:: annotation
id:: 64361fd1-49ff-4023-8493-840ac423086a
hl-color:: yellow
- If any of these four conditions are not met, deadlock cannot occur.
- **Mutual exclusion**: Threads claim exclusive control of resources that they require
- **Hold-and-wait**: Threads hold resources allocated to them while waiting for additional resources
- **No preemption**: Resources cannot be forcibly removed from threads that are holding them.
- **Circular wait**: There exists a circular chain of threads such that each thread holds one or more resources that are being requested by the next thread in the chain.
- **Prevention**: break the conditions for deadlock
hl-page:: 422
ls-type:: annotation
id:: 643620d9-cdb6-4073-89f4-f9f8ac223073
hl-color:: yellow
- **Circular Wait**: Never induce a circular wait.
hl-page:: 422
ls-type:: annotation
id:: 643620fb-edc6-43b2-b4b2-43b010cfc46e
hl-color:: yellow
- Use a total ordering or a partial ordering of lock acquisition (recall discrete math: a total ordering is a restricted form of a partial ordering; in a partial ordering some pairs of elements are incomparable)
- Either way, follow some ordering when acquiring locks so that no cycle of waiting can form.
- ENFORCE LOCK ORDERING BY LOCK ADDRESS
ls-type:: annotation
hl-page:: 423
hl-color:: yellow
id:: 64362497-58cd-45da-8ab5-84f96e899e16
- **Hold-and-wait**: acquiring all locks at once, atomically.
hl-page:: 423
ls-type:: annotation
id:: 643625fe-423c-4b18-8c22-32d38720c5d0
hl-color:: yellow
- Not practical: it requires knowing every lock that will be needed ahead of time, and grabbing them all at once reduces concurrency
- **No Preemption**
hl-page:: 424
ls-type:: annotation
id:: 64362632-50e8-41dd-a1bc-bbf3d4312b0f
hl-color:: yellow
- `trylock` either grabs the lock (if it is available) and returns success or returns an error code indicating the lock is held
- Instead of blocking at the lock call, give up all previously acquired locks and start over if some of the locks are not available.
- ```C
while (1) {
    mutex_lock(&lock1);
    if (mutex_trylock(&lock2) == 0)   // 0 means lock2 was acquired
        break;                        // got both locks, done
    mutex_unlock(&lock1);             // back off: release lock1 and retry
}
```
- **livelock** problem: in some special cases, two threads may keep trying and giving up locks due to each other's intervention
hl-page:: 424
ls-type:: annotation
id:: 6436281f-4fdc-4586-83fb-b686cec3b76b
hl-color:: yellow
- random delay before looping back and trying the entire thing over again
- **Mutual Exclusion**: lock-free data structures
hl-page:: 425
ls-type:: annotation
id:: 643629ba-e746-41a6-b073-1199b3db3691
hl-color:: yellow
- use atomic instructions provided by hardware
- **Avoidance**
hl-page:: 427
ls-type:: annotation
id:: 64362af4-9b35-4e27-8ba2-0f5f8817526a
hl-color:: yellow
- By careful scheduling, deadlock could be avoided.
- Limited usage: the OS does not always have sufficient knowledge to do deadlock-free scheduling. Such approaches also limit concurrency.
- [[Banker's Algorithm]]
- **Detect and Recover**
ls-type:: annotation
hl-page:: 428
hl-color:: yellow
id:: 64362c62-3a12-4bcb-95ae-baf1ca69312e
- Allow deadlocks to occasionally occur, and then take some action once such a deadlock has been detected.
- terrific: excellent; wonderful; remarkable; very great
ls-type:: annotation
hl-page:: 428
hl-color:: green
id:: 64362b38-6dfb-4c00-8aa6-b756e8983de4
- maxim: a saying; an aphorism; a motto
ls-type:: annotation
hl-page:: 428
hl-color:: green
id:: 64362b40-5f07-418f-83f3-c83eb5927c94
- nasty: very bad; disgusting; unpleasant; unkind
ls-type:: annotation
hl-page:: 432
hl-color:: green
id:: 64364569-01b4-45e1-83f8-ac1bd8af5850
- **Event-based Concurrency**
ls-type:: annotation
hl-page:: 432
hl-color:: yellow
id:: 64364585-ace4-4920-87fe-87aad004dffd
- event loop: waits for something to do and then, for each event returned, processes them, one at a time
hl-page:: 433
ls-type:: annotation
id:: 643658f3-4761-4d0c-b044-4cadcfea27aa
hl-color:: yellow
- event handler
ls-type:: annotation
hl-page:: 433
hl-color:: yellow
id:: 643658f9-5eee-4d1a-a3d6-4f8eb9ed3d7b
- `select` or `poll`
hl-page:: 433
ls-type:: annotation
id:: 64365db8-a249-46bc-bd9c-237251c544b5
hl-color:: yellow
- Check whether there is any incoming I/O that should be attended to.
- ```C
int select(
int nfds,
fd_set *restrict readfds,
fd_set *restrict writefds,
fd_set *restrict errorfds,
struct timeval *restrict timeout);
```
- Examine whether some of their descriptors are ready for reading/writing or have an exceptional condition pending. The first `nfds` descriptors are checked in each set.
hl-page:: 434
ls-type:: annotation
id:: 64365eb6-5310-4893-9d11-5e332ef84c4a
hl-color:: yellow
- `select` replaces the given descriptor sets with ==subsets of the ready descriptors==. `select()` ==returns the total number of ready descriptors== in all the sets.
hl-page:: 434
ls-type:: annotation
id:: 64365ef8-3c62-4d78-8bc6-d0a4b2c81d49
hl-color:: yellow
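- A skeleton event loop built around `select()`, in the spirit of the book's example; `minFD`/`maxFD` stand for a range of already-open connection descriptors, and the commented-out `process_fd()` handler is a placeholder:
```C
#include <sys/select.h>

int main(void) {
    int minFD = 3, maxFD = 10;    // placeholders: descriptors of already-open connections
    while (1) {
        fd_set read_fds;
        FD_ZERO(&read_fds);
        for (int fd = minFD; fd <= maxFD; fd++)
            FD_SET(fd, &read_fds);

        // block until at least one descriptor is ready for reading
        int rc = select(maxFD + 1, &read_fds, NULL, NULL, NULL);
        if (rc < 0)
            break;                // error handling omitted

        for (int fd = minFD; fd <= maxFD; fd++)
            if (FD_ISSET(fd, &read_fds)) {
                // process_fd(fd);   // event handler: must not block
            }
    }
    return 0;
}
```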
- Blocking I/O: NO blocking calls are allowed in event-based systems, because a blocking call would stop the whole process.
- **Asynchronous I/O**
ls-type:: annotation
hl-page:: 437
hl-color:: yellow
id:: 643693db-d363-46ee-b0d6-910b30408946
- Issue an I/O request and return control immediately to the caller, before completion. Additional interfaces to determine whether the IOs have completed.
hl-page:: 437
ls-type:: annotation
id:: 64369701-8a39-4aa4-9985-129572c04f53
hl-color:: yellow
- AIO control block `aiocb`
- `int aio_read(struct aiocb *aiocbp);` issues an asynchronous read request
- `int aio_error(const struct aiocb *aiocbp);` checks whether the request (designated by the `aiocb`) has completed
- Polling for I/O completion is inefficient; interrupt-based approaches (e.g. UNIX signals) can inform applications when async I/O completes instead (a polling sketch follows below).
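- A minimal POSIX AIO sketch that issues a read and then polls for completion; the file path is hypothetical, and a real program would do useful work (or use a signal/callback) instead of spinning on `aio_error`:
```C
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[512];
    int fd = open("/tmp/some_file", O_RDONLY);   // hypothetical file; error check omitted

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;           // which file to read
    cb.aio_offset = 0;            // where in the file
    cb.aio_buf = buf;             // where to put the data
    cb.aio_nbytes = sizeof(buf);  // how much to read

    aio_read(&cb);                // issue the request; returns immediately

    while (aio_error(&cb) == EINPROGRESS)
        ;                         // poll until the request completes

    close(fd);
    return 0;
}
```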
- Problems
- State management
- manual stack management: when an event handler issues an asynchronous I/O, it must package up some program state for the next event handler to use when the I/O finally completes; this additional work is not needed in thread-based programs, as the state the program needs is on the stack of the thread.
hl-page:: 438
ls-type:: annotation
id:: 6436a3d9-ee29-4378-af79-4efc770cc209
hl-color:: yellow
- continuation: record the needed information to finish processing this event in some data structure; when the event happens (i.e., when the disk I/O completes), look up the needed information and process the event.
hl-page:: 440
ls-type:: annotation
id:: 6436a40a-121f-4fab-b428-b278e4cb65d3
hl-color:: yellow
- Utilizing multiple CPUs
hl-page:: 440
ls-type:: annotation
id:: 6436a46c-f845-4c7b-8bb1-97da71589c67
hl-color:: yellow
- Implicit blocking such as paging
hl-page:: 440
ls-type:: annotation
id:: 6436a485-7a70-4974-93d2-9e11b010a948
hl-color:: yellow
- Messy code base due to complicated asynchronous logic

View File

@ -375,6 +375,7 @@ file-path:: ../assets/ostep_1681115599584_0.pdf
ls-type:: annotation
id:: 643537ff-1028-4725-8d7a-c0338cc946d3
hl-color:: yellow
collapsed:: true
- A ==condition variable== is an explicit queue that threads can put themselves on when some state of execution(condition) is not as desired (by *waiting on the condition*); some other thread, when it changes said state, can then wake one (or more) of those waiting threads and thus allow them to continue (by *signaling*).
hl-page:: 378
ls-type:: annotation
@ -408,6 +409,7 @@ file-path:: ../assets/ostep_1681115599584_0.pdf
ls-type:: annotation
id:: 64354974-adea-4b20-90f4-a12ebe1e4d5b
hl-color:: yellow
collapsed:: true
- **Mesa semantics**: Signaling a thread only wakes them up; it is thus a hint that the state of the world has ==changed==, but there is ==no guarantee== that when the woken thread runs, the state will ==still be as desired==. (Another guy may run before the thread and change the state again)
hl-page:: 385
ls-type:: annotation
@ -680,6 +682,7 @@ file-path:: ../assets/ostep_1681115599584_0.pdf
ls-type:: annotation
id:: 64364585-ace4-4920-87fe-87aad004dffd
hl-color:: yellow
collapsed:: true
- event loop: waits for something to do and then, for each event returned, processes them, one at a time
hl-page:: 433
ls-type:: annotation
@ -762,8 +765,180 @@ file-path:: ../assets/ostep_1681115599584_0.pdf
hl-page:: 448
hl-color:: green
id:: 6436caa1-6fe0-4de8-9ad4-2a057960fc1a
- System Architecture
- ## System Architecture
ls-type:: annotation
hl-page:: 450
hl-color:: yellow
id:: 6436cc2e-b1af-4555-9d1d-808e6de120b1
collapsed:: true
- memory bus, general IO bus, peripheral bus
- **Canonical Device**
hl-page:: 452
ls-type:: annotation
id:: 643786f0-5f9c-4441-8898-82ccd6a1a464
hl-color:: yellow
- A hardware interface (registers plus a protocol) that lets OS software control the device, and an internal structure that implements the abstraction
- **Canonical Protocol**
hl-page:: 453
ls-type:: annotation
id:: 64378926-ce8a-4e38-a3fe-62fb5c4994e6
hl-color:: yellow
- The interface is comprised of 3 registers: *status*, *command*, *data*.
- 1. Poll the device, i.e. repeatedly read the *status* register to see if the device is ready
2. Transfer some data to the *data* register
3. Write a command to the *command* register, telling the device to start the operation
4. Poll again to see whether the request has completed (a pseudo-C sketch of these steps follows below)
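- A pseudo-C sketch of the four steps above; the register accessors and the `BUSY` flag are placeholders for real device registers:
```C
// Pseudo-C: read_status_register(), write_data_register(),
// write_command_register(), and BUSY are placeholders.
void polling_write(char *data, int len, int command) {
    while (read_status_register() == BUSY)
        ;                                 // 1. poll until the device is ready
    write_data_register(data, len);       // 2. move the data into the DATA register (this is PIO)
    write_command_register(command);      // 3. issue the command; the device starts working
    while (read_status_register() == BUSY)
        ;                                 // 4. poll until the request has completed
}
```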
- programmed I/O (PIO): CPU is involved with the data movement
hl-page:: 453
ls-type:: annotation
id:: 64378c55-677c-4ab7-94c6-02ff41b90ded
hl-color:: yellow
- **Interrupt** instead of poll
- Polling wastes CPU time, which is why interrupts exist. The OS ==issues a request, puts the caller to sleep, and context switches==. When the device is done, it raises a hardware interrupt, causing the CPU to jump to the ==interrupt service routine== (ISR), which ==finishes the request and wakes up the process==.
- Interrupt is no panacea.
- Not suitable for ==high-speed devices== that may complete the work by the first poll; an interrupt only adds overhead.
- Not suitable for networks due to possible *livelock*: with a ==huge number of packets incoming==, the system may find itself ==only processing interrupts== and never allowing a user process to service the requests.
- Interrupt coalescing: the device waits a bit and raises a single interrupt for multiple completed requests.
hl-page:: 455
ls-type:: annotation
id:: 64378e9e-0f95-4312-a19e-3ee9d0b4ef1e
hl-color:: yellow
- **Direct Memory Access (DMA)**
hl-page:: 456
ls-type:: annotation
id:: 64379241-c097-4aaa-b545-582df132b35f
hl-color:: yellow
- Programmed I/O also wastes CPU: the CPU does nothing but tediously copy data.
- To transfer data to a device, the OS tells the DMA controller the data's address and size and then context switches; the DMA engine does the rest of the copying, which overlaps with CPU work.
- IO instructions and memory-mapped IO
- **Device Driver**
hl-page:: 457
ls-type:: annotation
id:: 6437989d-c18e-4cc7-9cb0-737384cc7960
hl-color:: yellow
- Encapsulates the ==specifics of device== interaction: ==software in the OS== that knows the details of the device at the ==lowest level==.
- Figure 36.4: The Linux File System Stack
ls-type:: annotation
hl-page:: 458
hl-color:: yellow
id:: 643799a7-dfae-46e0-88e6-ebf587755d75
- System Call API, File System/Raw, Generic Block Interface (block r/w), Generic Block Layer, Specific Block Interface (protocol r/w), Device Driver
- A Simple IDE Disk Driver
ls-type:: annotation
hl-page:: 458
hl-color:: yellow
id:: 64379e9a-840a-48c9-b804-03e6b179a6a6
- An introduction to the xv6 IDE driver, which gives an intuition for how a real driver works; the code is fairly simple.
- manifold
ls-type:: annotation
hl-page:: 450
hl-color:: green
id:: 64378274-897c-4aac-b246-49bda634b872
- oblivious
ls-type:: annotation
hl-page:: 457
hl-color:: green
id:: 64379a07-5bc3-49b2-93e2-f371ad2b5347
- haul
ls-type:: annotation
hl-page:: 460
hl-color:: green
id:: 64379b8b-7c37-4d7e-8135-1d025eb42ae3
- trailer
ls-type:: annotation
hl-page:: 460
hl-color:: green
id:: 64379b93-cb30-45a8-afe6-53052c08fa6f
- obscure
ls-type:: annotation
hl-page:: 460
hl-color:: green
id:: 64379ba3-e41d-411f-ab6d-9a5f1424ac26
- ## Hard Disk Drives
ls-type:: annotation
hl-page:: 464
hl-color:: yellow
id:: 64379f7c-b440-4023-bc10-fd27071ec742
- Address space of an HDD: an array of sectors (512-byte blocks), numbered from 0 to n-1, each of which can be read/written as a unit.
hl-page:: 464
ls-type:: annotation
id:: 6437a316-6185-4eae-bc56-eeca9c5dfc0d
hl-color:: yellow
- Only a ==single sector write is atomic==, though multi-sector operations are possible (e.g. widely-used 4KB r/w)
- one can usually assume that accessing two blocks near one-another within the drive's address space will be faster than accessing two blocks that are far apart. One can also usually assume that accessing blocks in a contiguous chunk (i.e., a sequential read or write) is the fastest access mode, and usually much faster than any more random access pattern.
ls-type:: annotation
hl-page:: 465
hl-color:: yellow
id:: 6437a4a9-3103-4830-abc7-dba0b1067b76
- **Components of Disk**
hl-page:: 465
ls-type:: annotation
id:: 6437a4da-bca4-4f13-b018-30f3400d169f
hl-color:: yellow
- platter (a large flat plate): a circular hard surface on which data is stored; an HDD is comprised of one or more platters
hl-page:: 465
ls-type:: annotation
id:: 6437a4f2-5d89-495a-a984-b427a3d03e74
hl-color:: yellow
- surface: each platter has 2 sides, each of which is called a surface
hl-page:: 465
ls-type:: annotation
id:: 6437a4f9-3de4-451b-a7cc-faf67b8530e8
hl-color:: yellow
- spindle (an axle or rotating shaft): connected to a motor that spins the platters bound around it; the speed is measured in rotations per minute (RPM)
hl-page:: 465
ls-type:: annotation
id:: 6437a4fd-450b-49ff-acd1-e46d3b507079
hl-color:: yellow
- track: a concentric circle of sectors; a surface consists of many tracks.
hl-page:: 465
ls-type:: annotation
id:: 6437a503-6b91-4b61-b288-9cea9c2ea832
hl-color:: yellow
- disk head: magnetic sensor, one per surface
hl-page:: 465
ls-type:: annotation
id:: 6437a50b-a53a-476b-8ef2-8bcbc21d7073
hl-color:: yellow
- disk arm: all disk heads connect to the disk arm, which moves the heads to the desired track
hl-page:: 465
ls-type:: annotation
id:: 6437a50f-5c6c-47ff-9179-ac48118342d7
hl-color:: yellow
- **IO time**
- **Rotational Delay**: wait for the desired sector to rotate under the disk head
hl-page:: 466
ls-type:: annotation
id:: 6437a841-9b37-42dc-a8dc-339085099a5a
hl-color:: yellow
- **Seek operation**: move the *disk head* to the ==desired track==.
hl-page:: 467
ls-type:: annotation
id:: 6437aa03-61a9-40c1-ba53-98d0e1ab87b9
hl-color:: yellow
- Seek phases: Acceleration (start), Coasting (move at full speed), Deceleration (slow down), Settling (stop carefully, often takes most of the time)
- **General IO process**: 1. seek; 2. waiting for the rotational delay; 3. finally the transfer.
hl-page:: 467
ls-type:: annotation
id:: 6437abff-a6b8-4d28-8a4e-8e67fe9cdd4d
hl-color:: yellow
- Mathematical Analysis
- IO time: $T_{IO} = T_{seek} + T_{rotation} + T_{transfer}$
- IO rate: $R_{IO} = \frac{Size_{\text{trans}}}{T_{IO}}$
- $T_{\text{trans}} \approx \frac{Size_{\text{trans}}}{\text{Peak Transfer Rate}}$, $T_{\text{rotation}} \approx \frac{1}{2}\cdot\frac{1}{\text{RPM}/60}$ (half a rotation on average), and $T_{seek}$ is reported by the manufacturer
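- A quick worked example with assumed numbers (not from the book's tables): for a 10,000 RPM drive, one rotation takes $60/10000 = 6\,\text{ms}$, so $T_{\text{rotation}} \approx 3\,\text{ms}$; with an average $T_{seek} \approx 4\,\text{ms}$ and a 100 MB/s peak transfer rate, a random 4 KB request costs roughly $4 + 3 + 0.04 \approx 7\,\text{ms}$, giving $R_{IO} \approx 4\,\text{KB} / 7\,\text{ms} \approx 0.57\,\text{MB/s}$.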
- Miscellaneous details about HDD
- **Track skew**: an optimization for sequential reads that cross a track boundary; the first sector of the next track is offset so that the head arrives in time after the seek
hl-page:: 467
ls-type:: annotation
id:: 6437acc8-bc95-466b-9d04-acfe22b0eeee
hl-color:: yellow
- **Multi-zoned Disk**: outer tracks tend to have more sectors than inner tracks. A zone is a set of tracks with the same number of sectors, and a disk is organized into multiple zones
hl-page:: 468
ls-type:: annotation
id:: 6437ad1b-5292-4a42-80c4-8a1ff9f7f691
hl-color:: yellow
- Cache (track buffer): write-back (report completion once the write is in the cache) vs. write-through (report completion only after the write reaches the surface)
hl-page:: 468
ls-type:: annotation
id:: 6437ada7-4a51-4032-bdcc-110b47796be9
hl-color:: yellow