Nov 12 22:03:13.016513 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27 Nov 12 22:03:13.016528 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 22:03:13.016535 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 22:03:13.016540 kernel: BIOS-provided physical RAM map: Nov 12 22:03:13.016544 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 12 22:03:13.016548 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 12 22:03:13.016553 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 12 22:03:13.016557 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 12 22:03:13.016561 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 12 22:03:13.016565 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b27fff] usable Nov 12 22:03:13.016569 kernel: BIOS-e820: [mem 0x0000000081b28000-0x0000000081b28fff] ACPI NVS Nov 12 22:03:13.016574 kernel: BIOS-e820: [mem 0x0000000081b29000-0x0000000081b29fff] reserved Nov 12 22:03:13.016578 kernel: BIOS-e820: [mem 0x0000000081b2a000-0x000000008afccfff] usable Nov 12 22:03:13.016582 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Nov 12 22:03:13.016588 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Nov 12 22:03:13.016592 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Nov 12 22:03:13.016598 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Nov 12 22:03:13.016603 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Nov 12 22:03:13.016607 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Nov 12 22:03:13.016612 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 12 22:03:13.016616 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 12 22:03:13.016621 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 12 22:03:13.016625 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 12 22:03:13.016630 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 12 22:03:13.016634 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Nov 12 22:03:13.016639 kernel: NX (Execute Disable) protection: active Nov 12 22:03:13.016644 kernel: APIC: Static calls initialized Nov 12 22:03:13.016648 kernel: SMBIOS 3.2.1 present. 
Nov 12 22:03:13.016654 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Nov 12 22:03:13.016659 kernel: tsc: Detected 3400.000 MHz processor Nov 12 22:03:13.016663 kernel: tsc: Detected 3399.906 MHz TSC Nov 12 22:03:13.016668 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 22:03:13.016673 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 22:03:13.016678 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Nov 12 22:03:13.016682 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Nov 12 22:03:13.016687 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 22:03:13.016692 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Nov 12 22:03:13.016697 kernel: Using GB pages for direct mapping Nov 12 22:03:13.016702 kernel: ACPI: Early table checksum verification disabled Nov 12 22:03:13.016707 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 12 22:03:13.016714 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Nov 12 22:03:13.016719 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Nov 12 22:03:13.016724 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 12 22:03:13.016729 kernel: ACPI: FACS 0x000000008C66CF80 000040 Nov 12 22:03:13.016735 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Nov 12 22:03:13.016740 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Nov 12 22:03:13.016745 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 12 22:03:13.016750 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 12 22:03:13.016755 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Nov 12 22:03:13.016760 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 12 22:03:13.016765 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 12 22:03:13.016771 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 12 22:03:13.016775 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 12 22:03:13.016780 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 12 22:03:13.016785 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 12 22:03:13.016790 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 12 22:03:13.016795 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 12 22:03:13.016800 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 12 22:03:13.016805 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 12 22:03:13.016810 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 12 22:03:13.016816 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 12 22:03:13.016821 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 12 22:03:13.016826 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Nov 12 22:03:13.016831 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 12 22:03:13.016836 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 12 22:03:13.016841 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 12 22:03:13.016846 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Nov 12 22:03:13.016851 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 12 22:03:13.016857 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 12 22:03:13.016862 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 12 22:03:13.016867 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Nov 12 22:03:13.016872 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 12 22:03:13.016877 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Nov 12 22:03:13.016882 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Nov 12 22:03:13.016887 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Nov 12 22:03:13.016892 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Nov 12 22:03:13.016897 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Nov 12 22:03:13.016902 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Nov 12 22:03:13.016907 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Nov 12 22:03:13.016912 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Nov 12 22:03:13.016917 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Nov 12 22:03:13.016922 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Nov 12 22:03:13.016927 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Nov 12 22:03:13.016932 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Nov 12 22:03:13.016937 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Nov 12 22:03:13.016942 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Nov 12 22:03:13.016948 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Nov 12 22:03:13.016952 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Nov 12 22:03:13.016957 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Nov 12 22:03:13.016962 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Nov 12 22:03:13.016967 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Nov 12 22:03:13.016972 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Nov 12 22:03:13.016977 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Nov 12 22:03:13.016982 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Nov 12 22:03:13.016987 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Nov 12 22:03:13.016992 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Nov 12 22:03:13.016997 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Nov 12 22:03:13.017002 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Nov 12 22:03:13.017007 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Nov 12 22:03:13.017012 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Nov 12 22:03:13.017017 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Nov 12 22:03:13.017022 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Nov 12 22:03:13.017027 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Nov 12 22:03:13.017032 kernel: No NUMA configuration found Nov 12 22:03:13.017037 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 12 22:03:13.017043 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 12 22:03:13.017048 kernel: Zone ranges: Nov 12 22:03:13.017053 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 22:03:13.017058 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 12 
22:03:13.017063 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Nov 12 22:03:13.017068 kernel: Movable zone start for each node Nov 12 22:03:13.017073 kernel: Early memory node ranges Nov 12 22:03:13.017078 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 12 22:03:13.017083 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 12 22:03:13.017091 kernel: node 0: [mem 0x0000000040400000-0x0000000081b27fff] Nov 12 22:03:13.017097 kernel: node 0: [mem 0x0000000081b2a000-0x000000008afccfff] Nov 12 22:03:13.017102 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Nov 12 22:03:13.017107 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 12 22:03:13.017115 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 12 22:03:13.017122 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 12 22:03:13.017127 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 22:03:13.017132 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 12 22:03:13.017139 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 12 22:03:13.017144 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 12 22:03:13.017149 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 12 22:03:13.017155 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Nov 12 22:03:13.017160 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 12 22:03:13.017165 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 12 22:03:13.017171 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 12 22:03:13.017176 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 12 22:03:13.017181 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 12 22:03:13.017188 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 12 22:03:13.017193 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 12 22:03:13.017198 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 12 22:03:13.017203 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 12 22:03:13.017209 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 12 22:03:13.017214 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 12 22:03:13.017219 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 12 22:03:13.017224 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 12 22:03:13.017230 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 12 22:03:13.017236 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 12 22:03:13.017241 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 12 22:03:13.017246 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 12 22:03:13.017251 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 12 22:03:13.017257 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 12 22:03:13.017262 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 12 22:03:13.017267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 22:03:13.017273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 22:03:13.017278 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 22:03:13.017283 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 22:03:13.017290 kernel: TSC deadline timer available Nov 12 22:03:13.017295 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 12 22:03:13.017300 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Nov 12 22:03:13.017306 kernel: Booting paravirtualized kernel on bare hardware Nov 12 22:03:13.017311 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 22:03:13.017317 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 12 22:03:13.017322 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Nov 12 22:03:13.017327 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Nov 12 22:03:13.017333 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 12 22:03:13.017339 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 22:03:13.017345 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 22:03:13.017350 kernel: random: crng init done Nov 12 22:03:13.017355 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 12 22:03:13.017361 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 12 22:03:13.017366 kernel: Fallback order for Node 0: 0 Nov 12 22:03:13.017371 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Nov 12 22:03:13.017378 kernel: Policy zone: Normal Nov 12 22:03:13.017383 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 22:03:13.017388 kernel: software IO TLB: area num 16. Nov 12 22:03:13.017394 kernel: Memory: 32720296K/33452980K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 732424K reserved, 0K cma-reserved) Nov 12 22:03:13.017399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 12 22:03:13.017405 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 22:03:13.017410 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 22:03:13.017415 kernel: Dynamic Preempt: voluntary Nov 12 22:03:13.017421 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 22:03:13.017427 kernel: rcu: RCU event tracing is enabled. Nov 12 22:03:13.017433 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 12 22:03:13.017438 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 22:03:13.017444 kernel: Rude variant of Tasks RCU enabled. Nov 12 22:03:13.017449 kernel: Tracing variant of Tasks RCU enabled. Nov 12 22:03:13.017455 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 22:03:13.017460 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 12 22:03:13.017465 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 12 22:03:13.017471 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 22:03:13.017476 kernel: Console: colour dummy device 80x25 Nov 12 22:03:13.017482 kernel: printk: console [tty0] enabled Nov 12 22:03:13.017487 kernel: printk: console [ttyS1] enabled Nov 12 22:03:13.017493 kernel: ACPI: Core revision 20230628 Nov 12 22:03:13.017498 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Nov 12 22:03:13.017503 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 22:03:13.017509 kernel: DMAR: Host address width 39 Nov 12 22:03:13.017514 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 12 22:03:13.017520 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 12 22:03:13.017526 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Nov 12 22:03:13.017532 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 12 22:03:13.017537 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 12 22:03:13.017543 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 12 22:03:13.017548 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 12 22:03:13.017553 kernel: x2apic enabled Nov 12 22:03:13.017559 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 12 22:03:13.017564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 12 22:03:13.017569 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Nov 12 22:03:13.017575 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 12 22:03:13.017581 kernel: process: using mwait in idle threads Nov 12 22:03:13.017586 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 12 22:03:13.017592 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Nov 12 22:03:13.017597 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 22:03:13.017602 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 12 22:03:13.017607 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 12 22:03:13.017612 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 12 22:03:13.017618 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 22:03:13.017623 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 12 22:03:13.017628 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 12 22:03:13.017635 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 22:03:13.017640 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 22:03:13.017645 kernel: TAA: Mitigation: TSX disabled Nov 12 22:03:13.017651 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 12 22:03:13.017656 kernel: SRBDS: Mitigation: Microcode Nov 12 22:03:13.017661 kernel: GDS: Mitigation: Microcode Nov 12 22:03:13.017666 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 22:03:13.017672 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 22:03:13.017677 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 22:03:13.017682 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 12 22:03:13.017687 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 12 22:03:13.017693 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 22:03:13.017699 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 12 22:03:13.017704 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 12 22:03:13.017709 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Nov 12 22:03:13.017715 kernel: Freeing SMP alternatives memory: 32K Nov 12 22:03:13.017720 kernel: pid_max: default: 32768 minimum: 301 Nov 12 22:03:13.017725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 22:03:13.017730 kernel: landlock: Up and running. Nov 12 22:03:13.017736 kernel: SELinux: Initializing. Nov 12 22:03:13.017741 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.017746 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.017751 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 12 22:03:13.017758 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 12 22:03:13.017763 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 12 22:03:13.017768 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 12 22:03:13.017774 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 12 22:03:13.017779 kernel: ... version: 4 Nov 12 22:03:13.017784 kernel: ... bit width: 48 Nov 12 22:03:13.017790 kernel: ... generic registers: 4 Nov 12 22:03:13.017795 kernel: ... value mask: 0000ffffffffffff Nov 12 22:03:13.017800 kernel: ... max period: 00007fffffffffff Nov 12 22:03:13.017806 kernel: ... fixed-purpose events: 3 Nov 12 22:03:13.017812 kernel: ... event mask: 000000070000000f Nov 12 22:03:13.017817 kernel: signal: max sigframe size: 2032 Nov 12 22:03:13.017822 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 12 22:03:13.017828 kernel: rcu: Hierarchical SRCU implementation. Nov 12 22:03:13.017833 kernel: rcu: Max phase no-delay instances is 400. Nov 12 22:03:13.017838 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 12 22:03:13.017844 kernel: smp: Bringing up secondary CPUs ... Nov 12 22:03:13.017849 kernel: smpboot: x86: Booting SMP configuration: Nov 12 22:03:13.017855 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 12 22:03:13.017861 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 12 22:03:13.017866 kernel: smp: Brought up 1 node, 16 CPUs Nov 12 22:03:13.017872 kernel: smpboot: Max logical packages: 1 Nov 12 22:03:13.017877 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 12 22:03:13.017882 kernel: devtmpfs: initialized Nov 12 22:03:13.017888 kernel: x86/mm: Memory block size: 128MB Nov 12 22:03:13.017893 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b28000-0x81b28fff] (4096 bytes) Nov 12 22:03:13.017898 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Nov 12 22:03:13.017905 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 22:03:13.017910 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 12 22:03:13.017915 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 22:03:13.017921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 22:03:13.017926 kernel: audit: initializing netlink subsys (disabled) Nov 12 22:03:13.017931 kernel: audit: type=2000 audit(1731448987.039:1): state=initialized audit_enabled=0 res=1 Nov 12 22:03:13.017936 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 22:03:13.017942 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 22:03:13.017948 kernel: cpuidle: using governor menu Nov 12 22:03:13.017953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 22:03:13.017959 kernel: dca service started, version 1.12.1 Nov 12 22:03:13.017964 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 12 22:03:13.017969 kernel: PCI: Using configuration type 1 for base access Nov 12 22:03:13.017975 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 12 22:03:13.017980 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 22:03:13.017985 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 22:03:13.017991 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 22:03:13.017997 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 22:03:13.018002 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 22:03:13.018007 kernel: ACPI: Added _OSI(Module Device) Nov 12 22:03:13.018013 kernel: ACPI: Added _OSI(Processor Device) Nov 12 22:03:13.018018 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 22:03:13.018023 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 22:03:13.018029 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 12 22:03:13.018034 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018039 kernel: ACPI: SSDT 0xFFFF91CC41EC4C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 12 22:03:13.018045 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018051 kernel: ACPI: SSDT 0xFFFF91CC41EB8000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 12 22:03:13.018056 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018061 kernel: ACPI: SSDT 0xFFFF91CC4152E700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 12 22:03:13.018067 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018072 kernel: ACPI: SSDT 0xFFFF91CC41EBE000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 12 22:03:13.018077 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018082 kernel: ACPI: SSDT 0xFFFF91CC41ECE000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 12 22:03:13.018087 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018095 kernel: ACPI: SSDT 0xFFFF91CC41EC1C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 12 22:03:13.018101 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 12 22:03:13.018107 kernel: ACPI: Interpreter enabled Nov 12 22:03:13.018112 kernel: ACPI: PM: (supports S0 S5) Nov 12 22:03:13.018117 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 22:03:13.018122 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 12 22:03:13.018128 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 12 22:03:13.018133 kernel: HEST: Table parsing has been initialized. Nov 12 22:03:13.018138 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 12 22:03:13.018144 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 22:03:13.018150 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 22:03:13.018155 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 12 22:03:13.018161 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 12 22:03:13.018166 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 12 22:03:13.018171 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 12 22:03:13.018177 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 12 22:03:13.018182 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 12 22:03:13.018187 kernel: ACPI: \_TZ_.FN00: New power resource Nov 12 22:03:13.018193 kernel: ACPI: \_TZ_.FN01: New power resource Nov 12 22:03:13.018199 kernel: ACPI: \_TZ_.FN02: New power resource Nov 12 22:03:13.018205 kernel: ACPI: \_TZ_.FN03: New power resource Nov 12 22:03:13.018210 kernel: ACPI: \_TZ_.FN04: New power resource Nov 12 22:03:13.018215 kernel: ACPI: \PIN_: New power resource Nov 12 22:03:13.018220 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 12 22:03:13.018292 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 22:03:13.018347 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 12 22:03:13.018394 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 12 22:03:13.018404 kernel: PCI host bridge to bus 0000:00 Nov 12 22:03:13.018456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 22:03:13.018499 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 22:03:13.018542 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 22:03:13.018583 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 12 22:03:13.018626 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Nov 12 22:03:13.018667 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 12 22:03:13.018726 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 12 22:03:13.018782 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 12 22:03:13.018831 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.018883 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 12 22:03:13.018931 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 12 22:03:13.018982 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 12 22:03:13.019032 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 12 22:03:13.019084 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 12 22:03:13.019137 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 12 22:03:13.019185 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 12 22:03:13.019236 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 12 22:03:13.019284 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 12 22:03:13.019333 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 12 22:03:13.019384 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 12 22:03:13.019430 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 12 22:03:13.019484 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 12 22:03:13.019532 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 12 22:03:13.019583 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 12 22:03:13.019632 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 12 22:03:13.019682 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 12 22:03:13.019740 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 12 22:03:13.019790 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 12 22:03:13.019836 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 12 22:03:13.019888 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 12 22:03:13.019936 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 12 22:03:13.019985 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 12 22:03:13.020036 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 12 22:03:13.020084 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 12 22:03:13.020135 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 12 22:03:13.020182 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 12 22:03:13.020229 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 12 22:03:13.020275 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 12 22:03:13.020327 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 12 22:03:13.020373 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 12 22:03:13.020425 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 12 22:03:13.020472 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020530 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 12 22:03:13.020578 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020632 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 12 22:03:13.020679 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020731 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 12 22:03:13.020782 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020836 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 12 22:03:13.020884 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020935 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 12 22:03:13.020983 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 12 22:03:13.021034 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 12 22:03:13.021085 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 12 22:03:13.021170 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 12 22:03:13.021217 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 12 22:03:13.021270 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 12 22:03:13.021318 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 12 22:03:13.021372 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 12 22:03:13.021421 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 12 22:03:13.021473 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 12 22:03:13.021521 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 12 22:03:13.021571 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 12 22:03:13.021620 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Nov 12 22:03:13.021673 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 12 22:03:13.021723 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 12 22:03:13.021771 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 12 22:03:13.021823 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 12 22:03:13.021871 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 12 22:03:13.021920 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 12 22:03:13.021968 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 22:03:13.022016 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 12 22:03:13.022064 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 12 22:03:13.022116 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 12 22:03:13.022170 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Nov 12 22:03:13.022221 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 12 22:03:13.022271 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 12 22:03:13.022318 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 12 22:03:13.022367 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 12 22:03:13.022415 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.022463 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 12 22:03:13.022510 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 12 22:03:13.022560 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 12 22:03:13.022614 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 12 22:03:13.022662 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 12 22:03:13.022714 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 12 22:03:13.022762 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 12 22:03:13.022811 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 12 22:03:13.022859 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.022910 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 12 22:03:13.022959 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 12 22:03:13.023008 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 12 22:03:13.023055 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 12 22:03:13.023112 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 12 22:03:13.023162 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 12 22:03:13.023211 kernel: pci 0000:06:00.0: supports D1 D2 Nov 12 22:03:13.023261 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 22:03:13.023311 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 12 22:03:13.023360 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.023408 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.023462 kernel: pci_bus 0000:07: extended config space not accessible Nov 12 22:03:13.023517 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 12 22:03:13.023568 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 12 22:03:13.023620 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 12 22:03:13.023673 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 12 22:03:13.023723 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 22:03:13.023773 kernel: pci 0000:07:00.0: supports D1 D2 Nov 12 22:03:13.023824 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 22:03:13.023874 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 12 22:03:13.023922 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.023971 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.023981 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 12 22:03:13.023987 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 12 22:03:13.023993 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 12 22:03:13.023999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 12 22:03:13.024005 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 12 22:03:13.024010 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 12 22:03:13.024016 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 12 22:03:13.024021 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 12 22:03:13.024027 kernel: iommu: Default domain type: Translated Nov 12 22:03:13.024034 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 22:03:13.024040 kernel: PCI: Using ACPI for IRQ routing Nov 12 22:03:13.024045 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 22:03:13.024051 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 12 22:03:13.024057 kernel: e820: reserve RAM buffer [mem 0x81b28000-0x83ffffff] Nov 12 22:03:13.024062 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 12 22:03:13.024068 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 12 22:03:13.024073 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 12 22:03:13.024079 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 12 22:03:13.024164 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 12 22:03:13.024215 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 12 22:03:13.024266 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 22:03:13.024275 kernel: vgaarb: loaded Nov 12 22:03:13.024281 kernel: clocksource: Switched to clocksource tsc-early Nov 12 22:03:13.024287 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 22:03:13.024293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 22:03:13.024299 kernel: pnp: PnP ACPI init Nov 12 22:03:13.024348 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 12 22:03:13.024401 kernel: pnp 00:02: [dma 0 disabled] Nov 12 22:03:13.024448 kernel: pnp 00:03: [dma 0 disabled] Nov 12 22:03:13.024498 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 12 22:03:13.024542 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 12 22:03:13.024589 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Nov 12 22:03:13.024635 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Nov 12 22:03:13.024682 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Nov 12 22:03:13.024724 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Nov 12 22:03:13.024769 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Nov 12 22:03:13.024814 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 12 22:03:13.024859 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 12 22:03:13.024901 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 12 22:03:13.024948 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 12 22:03:13.024995 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Nov 12 22:03:13.025041 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 12 22:03:13.025086 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 12 22:03:13.025163 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 12 22:03:13.025207 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 12 22:03:13.025250 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 12 22:03:13.025296 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Nov 12 22:03:13.025342 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Nov 12 22:03:13.025350 kernel: pnp: PnP ACPI: found 10 devices Nov 12 22:03:13.025356 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 22:03:13.025362 kernel: NET: Registered PF_INET protocol family Nov 12 22:03:13.025368 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 22:03:13.025374 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 12 22:03:13.025379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 22:03:13.025386 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 22:03:13.025392 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 12 22:03:13.025398 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 12 22:03:13.025404 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.025409 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.025415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 22:03:13.025421 kernel: NET: Registered PF_XDP protocol family Nov 12 22:03:13.025468 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 12 22:03:13.025516 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 12 22:03:13.025567 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 12 22:03:13.025615 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025665 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025714 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025763 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025812 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 22:03:13.025860 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 12 22:03:13.025908 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 12 22:03:13.025958 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 12 22:03:13.026007 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 12 22:03:13.026054 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 12 22:03:13.026105 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 12 22:03:13.026192 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 12 22:03:13.026243 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 12 
22:03:13.026290 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 12 22:03:13.026338 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 12 22:03:13.026387 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 12 22:03:13.026436 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.026484 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.026532 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 12 22:03:13.026580 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.026630 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.026675 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 12 22:03:13.026717 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 22:03:13.026760 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 22:03:13.026802 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 22:03:13.026844 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 12 22:03:13.026886 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 12 22:03:13.026935 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 12 22:03:13.026982 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 12 22:03:13.027033 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 12 22:03:13.027077 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 12 22:03:13.027166 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 12 22:03:13.027210 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 12 22:03:13.027258 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 12 22:03:13.027305 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 12 22:03:13.027352 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 12 22:03:13.027400 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 12 22:03:13.027408 kernel: PCI: CLS 64 bytes, default 64 Nov 12 22:03:13.027414 kernel: DMAR: No ATSR found Nov 12 22:03:13.027420 kernel: DMAR: No SATC found Nov 12 22:03:13.027426 kernel: DMAR: dmar0: Using Queued invalidation Nov 12 22:03:13.027472 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 12 22:03:13.027523 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 12 22:03:13.027570 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 12 22:03:13.027617 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 12 22:03:13.027664 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 12 22:03:13.027712 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 12 22:03:13.027758 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 12 22:03:13.027805 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 12 22:03:13.027852 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 12 22:03:13.027902 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 12 22:03:13.027949 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 12 22:03:13.027996 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 12 22:03:13.028042 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 12 22:03:13.028092 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 12 22:03:13.028174 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 12 22:03:13.028222 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 12 22:03:13.028269 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 12 
22:03:13.028315 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 12 22:03:13.028365 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 12 22:03:13.028412 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 12 22:03:13.028460 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 12 22:03:13.028508 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 12 22:03:13.028558 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 12 22:03:13.028607 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 12 22:03:13.028657 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 12 22:03:13.028705 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 12 22:03:13.028757 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 12 22:03:13.028766 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 12 22:03:13.028772 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 12 22:03:13.028778 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 12 22:03:13.028783 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 12 22:03:13.028789 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 12 22:03:13.028795 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 12 22:03:13.028800 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 12 22:03:13.028852 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 12 22:03:13.028861 kernel: Initialise system trusted keyrings Nov 12 22:03:13.028867 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 12 22:03:13.028873 kernel: Key type asymmetric registered Nov 12 22:03:13.028878 kernel: Asymmetric key parser 'x509' registered Nov 12 22:03:13.028884 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 22:03:13.028889 kernel: io scheduler mq-deadline registered Nov 12 22:03:13.028895 kernel: io scheduler kyber registered Nov 12 22:03:13.028901 kernel: io scheduler bfq registered Nov 12 22:03:13.028949 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 12 22:03:13.028998 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 12 22:03:13.029046 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 12 22:03:13.029096 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 12 22:03:13.029176 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 12 22:03:13.029224 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 12 22:03:13.029275 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 12 22:03:13.029286 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 12 22:03:13.029292 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 12 22:03:13.029298 kernel: pstore: Using crash dump compression: deflate Nov 12 22:03:13.029304 kernel: pstore: Registered erst as persistent store backend Nov 12 22:03:13.029309 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 22:03:13.029315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 22:03:13.029321 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 22:03:13.029326 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 12 22:03:13.029332 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 12 22:03:13.029383 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 12 22:03:13.029392 kernel: i8042: PNP: No PS/2 controller found. 
Nov 12 22:03:13.029434 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 12 22:03:13.029478 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 12 22:03:13.029522 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-11-12T22:03:11 UTC (1731448991) Nov 12 22:03:13.029565 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 12 22:03:13.029573 kernel: intel_pstate: Intel P-state driver initializing Nov 12 22:03:13.029581 kernel: intel_pstate: Disabling energy efficiency optimization Nov 12 22:03:13.029587 kernel: intel_pstate: HWP enabled Nov 12 22:03:13.029592 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 12 22:03:13.029598 kernel: vesafb: scrolling: redraw Nov 12 22:03:13.029604 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 12 22:03:13.029609 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000fc4729d1, using 768k, total 768k Nov 12 22:03:13.029615 kernel: Console: switching to colour frame buffer device 128x48 Nov 12 22:03:13.029621 kernel: fb0: VESA VGA frame buffer device Nov 12 22:03:13.029627 kernel: NET: Registered PF_INET6 protocol family Nov 12 22:03:13.029633 kernel: Segment Routing with IPv6 Nov 12 22:03:13.029639 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 22:03:13.029645 kernel: NET: Registered PF_PACKET protocol family Nov 12 22:03:13.029650 kernel: Key type dns_resolver registered Nov 12 22:03:13.029656 kernel: microcode: Microcode Update Driver: v2.2. Nov 12 22:03:13.029662 kernel: IPI shorthand broadcast: enabled Nov 12 22:03:13.029667 kernel: sched_clock: Marking stable (2476000552, 1385614430)->(4405288002, -543673020) Nov 12 22:03:13.029673 kernel: registered taskstats version 1 Nov 12 22:03:13.029679 kernel: Loading compiled-in X.509 certificates Nov 12 22:03:13.029684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 22:03:13.029691 kernel: Key type .fscrypt registered Nov 12 22:03:13.029697 kernel: Key type fscrypt-provisioning registered Nov 12 22:03:13.029702 kernel: ima: Allocated hash algorithm: sha1 Nov 12 22:03:13.029708 kernel: ima: No architecture policies found Nov 12 22:03:13.029714 kernel: clk: Disabling unused clocks Nov 12 22:03:13.029719 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 22:03:13.029725 kernel: Write protecting the kernel read-only data: 36864k Nov 12 22:03:13.029731 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 22:03:13.029738 kernel: Run /init as init process Nov 12 22:03:13.029743 kernel: with arguments: Nov 12 22:03:13.029749 kernel: /init Nov 12 22:03:13.029755 kernel: with environment: Nov 12 22:03:13.029760 kernel: HOME=/ Nov 12 22:03:13.029766 kernel: TERM=linux Nov 12 22:03:13.029771 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 22:03:13.029778 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:03:13.029787 systemd[1]: Detected architecture x86-64. Nov 12 22:03:13.029793 systemd[1]: Running in initrd. Nov 12 22:03:13.029798 systemd[1]: No hostname configured, using default hostname. Nov 12 22:03:13.029804 systemd[1]: Hostname set to . Nov 12 22:03:13.029810 systemd[1]: Initializing machine ID from random generator. 
Nov 12 22:03:13.029816 systemd[1]: Queued start job for default target initrd.target. Nov 12 22:03:13.029822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:03:13.029828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:03:13.029835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 22:03:13.029841 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:03:13.029847 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 22:03:13.029853 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 22:03:13.029860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 22:03:13.029866 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 22:03:13.029873 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 12 22:03:13.029879 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 12 22:03:13.029885 kernel: clocksource: Switched to clocksource tsc Nov 12 22:03:13.029891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:03:13.029897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:03:13.029903 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:03:13.029908 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:03:13.029914 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:03:13.029920 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:03:13.029927 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:03:13.029933 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:03:13.029939 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 22:03:13.029945 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 22:03:13.029951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:03:13.029957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:03:13.029963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:03:13.029969 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:03:13.029975 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 22:03:13.029982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:03:13.029988 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 22:03:13.029994 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 22:03:13.030000 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:03:13.030015 systemd-journald[266]: Collecting audit messages is disabled. Nov 12 22:03:13.030030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:03:13.030037 systemd-journald[266]: Journal started Nov 12 22:03:13.030051 systemd-journald[266]: Runtime Journal (/run/log/journal/de3e9ab6c27746e8ad7e637fc13a3ac9) is 8.0M, max 639.9M, 631.9M free. 
Nov 12 22:03:13.052933 systemd-modules-load[268]: Inserted module 'overlay' Nov 12 22:03:13.075092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:03:13.103744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 22:03:13.165039 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:03:13.165055 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 22:03:13.165064 kernel: Bridge firewalling registered Nov 12 22:03:13.160269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:03:13.165029 systemd-modules-load[268]: Inserted module 'br_netfilter' Nov 12 22:03:13.186501 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 22:03:13.206425 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:03:13.220422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:13.264392 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:03:13.275716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:03:13.304327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:03:13.305201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:03:13.310154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:03:13.324866 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:13.348826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:03:13.369950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:03:13.403542 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 22:03:13.425425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:03:13.435539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:03:13.445767 systemd-resolved[305]: Positive Trust Anchors: Nov 12 22:03:13.490211 dracut-cmdline[303]: dracut-dracut-053 Nov 12 22:03:13.490211 dracut-cmdline[303]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 22:03:13.528260 kernel: SCSI subsystem initialized Nov 12 22:03:13.528291 kernel: Loading iSCSI transport class v2.0-870. Nov 12 22:03:13.445772 systemd-resolved[305]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:03:13.606213 kernel: iscsi: registered transport (tcp) Nov 12 22:03:13.606228 kernel: iscsi: registered transport (qla4xxx) Nov 12 22:03:13.606236 kernel: QLogic iSCSI HBA Driver Nov 12 22:03:13.445795 systemd-resolved[305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:03:13.447447 systemd-resolved[305]: Defaulting to hostname 'linux'. Nov 12 22:03:13.457284 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:03:13.478330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:03:13.514208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:03:13.611999 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 22:03:13.679398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 22:03:13.795541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 22:03:13.795559 kernel: device-mapper: uevent: version 1.0.3 Nov 12 22:03:13.815506 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 22:03:13.874156 kernel: raid6: avx2x4 gen() 52792 MB/s Nov 12 22:03:13.906130 kernel: raid6: avx2x2 gen() 53242 MB/s Nov 12 22:03:13.942763 kernel: raid6: avx2x1 gen() 44605 MB/s Nov 12 22:03:13.942779 kernel: raid6: using algorithm avx2x2 gen() 53242 MB/s Nov 12 22:03:13.990627 kernel: raid6: .... xor() 31127 MB/s, rmw enabled Nov 12 22:03:13.990643 kernel: raid6: using avx2x2 recovery algorithm Nov 12 22:03:14.032122 kernel: xor: automatically using best checksumming function avx Nov 12 22:03:14.145111 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 22:03:14.151293 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:03:14.172237 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:03:14.179599 systemd-udevd[497]: Using default interface naming scheme 'v255'. Nov 12 22:03:14.183327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:03:14.216400 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 22:03:14.271264 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Nov 12 22:03:14.291184 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:03:14.312378 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:03:14.393151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:03:14.437745 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 12 22:03:14.437763 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 12 22:03:14.410201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 12 22:03:14.531177 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 22:03:14.531202 kernel: ACPI: bus type USB registered Nov 12 22:03:14.531212 kernel: usbcore: registered new interface driver usbfs Nov 12 22:03:14.531222 kernel: usbcore: registered new interface driver hub Nov 12 22:03:14.531230 kernel: usbcore: registered new device driver usb Nov 12 22:03:14.531239 kernel: PTP clock support registered Nov 12 22:03:14.440206 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:03:14.440306 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:14.581215 kernel: libata version 3.00 loaded. Nov 12 22:03:14.581231 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 22:03:14.581239 kernel: AES CTR mode by8 optimization enabled Nov 12 22:03:14.581247 kernel: ahci 0000:00:17.0: version 3.0 Nov 12 22:03:14.857160 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 12 22:03:14.857240 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 12 22:03:14.857307 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 12 22:03:14.857370 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 12 22:03:14.857431 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 12 22:03:14.857491 kernel: scsi host0: ahci Nov 12 22:03:14.857554 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 12 22:03:14.857614 kernel: scsi host1: ahci Nov 12 22:03:14.857673 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 12 22:03:14.857735 kernel: scsi host2: ahci Nov 12 22:03:14.857794 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 12 22:03:14.857853 kernel: scsi host3: ahci Nov 12 22:03:14.857915 kernel: hub 1-0:1.0: USB hub found Nov 12 22:03:14.857985 kernel: scsi host4: ahci Nov 12 22:03:14.858043 kernel: hub 1-0:1.0: 16 ports detected Nov 12 22:03:14.858115 kernel: scsi host5: ahci Nov 12 22:03:14.858176 kernel: hub 2-0:1.0: USB hub found Nov 12 22:03:14.858244 kernel: scsi host6: ahci Nov 12 22:03:14.858304 kernel: hub 2-0:1.0: 10 ports detected Nov 12 22:03:14.858368 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 12 22:03:14.858377 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 12 22:03:14.858384 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 12 22:03:14.858393 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Nov 12 22:03:14.858401 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 12 22:03:14.858408 kernel: pps pps0: new PPS source ptp0 Nov 12 22:03:14.858467 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 12 22:03:14.858475 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 12 22:03:15.066238 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 12 22:03:15.066260 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 12 22:03:15.066283 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 12 22:03:15.066528 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 12 22:03:15.066545 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:ca Nov 12 22:03:15.066772 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 12 22:03:15.126298 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 12 22:03:15.126370 kernel: hub 1-14:1.0: USB hub found Nov 12 22:03:15.126439 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 12 22:03:15.126502 kernel: pps pps1: new PPS source ptp1 Nov 12 22:03:15.126564 kernel: hub 1-14:1.0: 4 ports detected Nov 12 22:03:15.126628 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 12 22:03:15.219206 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.219215 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 12 22:03:15.219280 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.219289 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:cb Nov 12 22:03:15.219351 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 12 22:03:15.219359 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 12 22:03:15.219422 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.219430 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 12 22:03:15.219491 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:14.493702 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:03:15.383223 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 12 22:03:15.383234 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.383242 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 12 22:03:15.383252 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 12 22:03:14.560180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 12 22:03:15.503431 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 12 22:03:15.503444 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Nov 12 22:03:15.897237 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 12 22:03:15.897248 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 12 22:03:15.897326 kernel: ata1.00: Features: NCQ-prio Nov 12 22:03:15.897335 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 12 22:03:15.897446 kernel: ata2.00: Features: NCQ-prio Nov 12 22:03:15.897454 kernel: ata1.00: configured for UDMA/133 Nov 12 22:03:15.897462 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 12 22:03:15.897536 kernel: ata2.00: configured for UDMA/133 Nov 12 22:03:15.897545 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 12 22:03:16.093989 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 12 22:03:16.094072 kernel: ata2.00: Enabling discard_zeroes_data Nov 12 22:03:16.094097 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 12 22:03:16.094185 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.094200 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 12 22:03:16.094266 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 12 22:03:16.094342 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 12 22:03:16.094402 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 12 22:03:16.094469 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 12 22:03:16.094528 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 12 22:03:16.094598 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 12 22:03:16.094663 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.094671 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 22:03:16.094679 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 12 22:03:16.094734 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 12 22:03:16.094806 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 12 22:03:16.094866 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 12 22:03:16.094935 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 12 22:03:16.094994 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 22:03:16.095002 kernel: GPT:9289727 != 937703087 Nov 12 22:03:16.095009 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 22:03:16.095016 kernel: GPT:9289727 != 937703087 Nov 12 22:03:16.095023 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 12 22:03:16.095030 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:16.095037 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 12 22:03:16.095125 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 12 22:03:16.095210 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 12 22:03:16.095286 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Nov 12 22:03:16.468887 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 12 22:03:16.468965 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 12 22:03:16.469031 kernel: ata2.00: Enabling discard_zeroes_data Nov 12 22:03:16.469040 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 12 22:03:16.469135 kernel: usbcore: registered new interface driver usbhid Nov 12 22:03:16.469159 kernel: usbhid: USB HID core driver Nov 12 22:03:16.469180 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (564) Nov 12 22:03:16.469187 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (553) Nov 12 22:03:16.469194 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 12 22:03:16.469201 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 12 22:03:16.469264 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 12 22:03:16.469324 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 12 22:03:16.469394 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 12 22:03:16.469402 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 12 22:03:16.469465 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.469474 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:16.469481 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.469488 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 12 22:03:16.469548 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:14.560328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:15.339467 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:03:16.541551 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.541563 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:15.417312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:03:16.557192 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 12 22:03:15.513430 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 22:03:15.514447 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:03:15.529583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:03:16.573349 disk-uuid[704]: Primary Header is updated. Nov 12 22:03:16.573349 disk-uuid[704]: Secondary Entries is updated. Nov 12 22:03:16.573349 disk-uuid[704]: Secondary Header is updated. Nov 12 22:03:15.539077 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Nov 12 22:03:15.613216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 22:03:16.091308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:16.110304 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:03:16.161754 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 12 22:03:16.206686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 12 22:03:16.261257 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 12 22:03:16.287163 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 12 22:03:16.332734 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 12 22:03:16.397377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 22:03:16.446310 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:03:16.582276 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:16.617340 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 12 22:03:17.518362 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:17.539127 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:17.539164 disk-uuid[705]: The operation has completed successfully. Nov 12 22:03:17.578036 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 22:03:17.578083 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 22:03:17.614478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 22:03:17.640254 sh[739]: Success Nov 12 22:03:17.650187 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 12 22:03:17.684629 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 22:03:17.705035 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 22:03:17.720421 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 22:03:17.764139 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 22:03:17.764157 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:17.786716 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 22:03:17.806876 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 22:03:17.825854 kernel: BTRFS info (device dm-0): using free space tree Nov 12 22:03:17.865137 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 12 22:03:17.867217 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 22:03:17.876586 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 22:03:17.886357 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 12 22:03:17.996757 kernel: BTRFS info (device sdb6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:17.996771 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:17.996779 kernel: BTRFS info (device sdb6): using free space tree Nov 12 22:03:17.996787 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 12 22:03:17.996794 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 12 22:03:17.926561 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 22:03:18.034369 kernel: BTRFS info (device sdb6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:18.036441 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 22:03:18.046939 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 22:03:18.092543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:03:18.111312 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:03:18.141522 ignition[829]: Ignition 2.19.0 Nov 12 22:03:18.141527 ignition[829]: Stage: fetch-offline Nov 12 22:03:18.143692 unknown[829]: fetched base config from "system" Nov 12 22:03:18.141549 ignition[829]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:18.143696 unknown[829]: fetched user config from "system" Nov 12 22:03:18.141555 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:18.144743 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:03:18.141610 ignition[829]: parsed url from cmdline: "" Nov 12 22:03:18.146565 systemd-networkd[923]: lo: Link UP Nov 12 22:03:18.141612 ignition[829]: no config URL provided Nov 12 22:03:18.146567 systemd-networkd[923]: lo: Gained carrier Nov 12 22:03:18.141615 ignition[829]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 22:03:18.148866 systemd-networkd[923]: Enumeration completed Nov 12 22:03:18.141637 ignition[829]: parsing config with SHA512: a49f23bd1b04298a78e389a4771c717b69c737912acd6a7dd82c6fc9ea4de2691a74696655b88b644d7eb1f9cf59b3ddaaad99dbe0b79f027a08d5adfe94950f Nov 12 22:03:18.148938 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:03:18.143911 ignition[829]: fetch-offline: fetch-offline passed Nov 12 22:03:18.149603 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:03:18.143914 ignition[829]: POST message to Packet Timeline Nov 12 22:03:18.172470 systemd[1]: Reached target network.target - Network. Nov 12 22:03:18.143917 ignition[829]: POST Status error: resource requires networking Nov 12 22:03:18.180150 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:03:18.143952 ignition[829]: Ignition finished successfully Nov 12 22:03:18.188366 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 22:03:18.213824 ignition[935]: Ignition 2.19.0 Nov 12 22:03:18.206322 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 22:03:18.213829 ignition[935]: Stage: kargs Nov 12 22:03:18.400276 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 12 22:03:18.208465 systemd-networkd[923]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 12 22:03:18.213960 ignition[935]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:18.395825 systemd-networkd[923]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:03:18.213968 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:18.214604 ignition[935]: kargs: kargs passed Nov 12 22:03:18.214607 ignition[935]: POST message to Packet Timeline Nov 12 22:03:18.214618 ignition[935]: GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:18.215156 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49181->[::1]:53: read: connection refused Nov 12 22:03:18.415671 ignition[935]: GET https://metadata.packet.net/metadata: attempt #2 Nov 12 22:03:18.416358 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56829->[::1]:53: read: connection refused Nov 12 22:03:18.617132 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 12 22:03:18.618655 systemd-networkd[923]: eno1: Link UP Nov 12 22:03:18.618847 systemd-networkd[923]: eno2: Link UP Nov 12 22:03:18.619027 systemd-networkd[923]: enp1s0f0np0: Link UP Nov 12 22:03:18.619252 systemd-networkd[923]: enp1s0f0np0: Gained carrier Nov 12 22:03:18.629287 systemd-networkd[923]: enp1s0f1np1: Link UP Nov 12 22:03:18.670469 systemd-networkd[923]: enp1s0f0np0: DHCPv4 address 147.75.202.249/31, gateway 147.75.202.248 acquired from 145.40.83.140 Nov 12 22:03:18.816890 ignition[935]: GET https://metadata.packet.net/metadata: attempt #3 Nov 12 22:03:18.818181 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36283->[::1]:53: read: connection refused Nov 12 22:03:19.432876 systemd-networkd[923]: enp1s0f1np1: Gained carrier Nov 12 22:03:19.618690 ignition[935]: GET https://metadata.packet.net/metadata: attempt #4 Nov 12 22:03:19.619963 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33913->[::1]:53: read: connection refused Nov 12 22:03:20.328680 systemd-networkd[923]: enp1s0f0np0: Gained IPv6LL Nov 12 22:03:21.221238 ignition[935]: GET https://metadata.packet.net/metadata: attempt #5 Nov 12 22:03:21.222573 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54864->[::1]:53: read: connection refused Nov 12 22:03:21.225386 systemd-networkd[923]: enp1s0f1np1: Gained IPv6LL Nov 12 22:03:24.426057 ignition[935]: GET https://metadata.packet.net/metadata: attempt #6 Nov 12 22:03:25.803829 ignition[935]: GET result: OK Nov 12 22:03:26.095652 ignition[935]: Ignition finished successfully Nov 12 22:03:26.098186 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 22:03:26.128493 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 12 22:03:26.139717 ignition[955]: Ignition 2.19.0 Nov 12 22:03:26.139722 ignition[955]: Stage: disks Nov 12 22:03:26.139824 ignition[955]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:26.139830 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:26.140356 ignition[955]: disks: disks passed Nov 12 22:03:26.140358 ignition[955]: POST message to Packet Timeline Nov 12 22:03:26.140367 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:26.697390 ignition[955]: GET result: OK Nov 12 22:03:27.001965 ignition[955]: Ignition finished successfully Nov 12 22:03:27.005487 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 22:03:27.020430 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 22:03:27.038355 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 22:03:27.060513 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:03:27.072664 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:03:27.100503 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:03:27.129378 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 22:03:27.164642 systemd-fsck[975]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 22:03:27.175861 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 22:03:27.193202 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 22:03:27.297142 kernel: EXT4-fs (sdb9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 22:03:27.297681 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 22:03:27.306539 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 22:03:27.338303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:03:27.369153 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (984) Nov 12 22:03:27.346709 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 22:03:27.481271 kernel: BTRFS info (device sdb6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:27.481290 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:27.481298 kernel: BTRFS info (device sdb6): using free space tree Nov 12 22:03:27.481305 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 12 22:03:27.481313 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 12 22:03:27.369745 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 12 22:03:27.504470 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 12 22:03:27.515346 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 22:03:27.556415 coreos-metadata[986]: Nov 12 22:03:27.553 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 12 22:03:27.515434 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:03:27.598295 coreos-metadata[1002]: Nov 12 22:03:27.553 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 12 22:03:27.540721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:03:27.565408 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 22:03:27.601349 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 22:03:27.646215 initrd-setup-root[1016]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 22:03:27.656212 initrd-setup-root[1023]: cut: /sysroot/etc/group: No such file or directory Nov 12 22:03:27.667216 initrd-setup-root[1030]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 22:03:27.678157 initrd-setup-root[1037]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 22:03:27.685397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 22:03:27.721445 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 22:03:27.764327 kernel: BTRFS info (device sdb6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:27.740387 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 22:03:27.773880 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 22:03:27.796589 ignition[1109]: INFO : Ignition 2.19.0 Nov 12 22:03:27.796589 ignition[1109]: INFO : Stage: mount Nov 12 22:03:27.803299 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:27.803299 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:27.803299 ignition[1109]: INFO : mount: mount passed Nov 12 22:03:27.803299 ignition[1109]: INFO : POST message to Packet Timeline Nov 12 22:03:27.803299 ignition[1109]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:27.801998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 22:03:28.205846 coreos-metadata[1002]: Nov 12 22:03:28.205 INFO Fetch successful Nov 12 22:03:28.235087 ignition[1109]: INFO : GET result: OK Nov 12 22:03:28.238868 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 12 22:03:28.238923 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 12 22:03:28.310705 coreos-metadata[986]: Nov 12 22:03:28.310 INFO Fetch successful Nov 12 22:03:28.380094 coreos-metadata[986]: Nov 12 22:03:28.380 INFO wrote hostname ci-4081.2.0-a-a9d0314af7 to /sysroot/etc/hostname Nov 12 22:03:28.381786 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 22:03:28.544458 ignition[1109]: INFO : Ignition finished successfully Nov 12 22:03:28.546991 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 22:03:28.581312 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 22:03:28.592168 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:03:28.654180 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1137) Nov 12 22:03:28.654198 kernel: BTRFS info (device sdb6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:28.674884 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:28.693139 kernel: BTRFS info (device sdb6): using free space tree Nov 12 22:03:28.731640 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 12 22:03:28.731663 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 12 22:03:28.745172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:03:28.771562 ignition[1154]: INFO : Ignition 2.19.0 Nov 12 22:03:28.771562 ignition[1154]: INFO : Stage: files Nov 12 22:03:28.786359 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:28.786359 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:28.786359 ignition[1154]: DEBUG : files: compiled without relabeling support, skipping Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 22:03:28.786359 ignition[1154]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 22:03:28.775443 unknown[1154]: wrote ssh authorized keys file for user: core Nov 12 22:03:28.918182 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 22:03:28.958786 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Nov 12 22:03:29.577993 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 22:03:30.412726 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:30.412726 ignition[1154]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: files passed Nov 12 22:03:30.443310 ignition[1154]: INFO : POST message to Packet Timeline Nov 12 22:03:30.443310 ignition[1154]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:31.174469 ignition[1154]: INFO : GET result: OK Nov 12 22:03:31.435308 ignition[1154]: INFO : Ignition finished successfully Nov 12 22:03:31.438318 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:03:31.469353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:03:31.479685 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:03:31.489631 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:03:31.489697 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:03:31.534402 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:03:31.549684 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 22:03:31.580567 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:03:31.580567 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:03:31.594445 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:03:31.584520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Nov 12 22:03:31.663358 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 22:03:31.663407 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 22:03:31.682490 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 22:03:31.703254 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 22:03:31.720334 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 22:03:31.734497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 22:03:31.806164 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:03:31.834512 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 22:03:31.863332 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:03:31.874723 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:03:31.895783 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 22:03:31.914726 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 22:03:31.915147 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:03:31.943819 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 22:03:31.965719 systemd[1]: Stopped target basic.target - Basic System. Nov 12 22:03:31.983687 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 22:03:32.002710 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:03:32.023687 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 22:03:32.044820 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 22:03:32.064715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:03:32.085753 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 22:03:32.107738 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 22:03:32.127697 systemd[1]: Stopped target swap.target - Swaps. Nov 12 22:03:32.146601 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 22:03:32.146997 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:03:32.182568 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:03:32.192730 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:03:32.214592 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 22:03:32.215029 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:03:32.237596 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 22:03:32.237993 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 22:03:32.269689 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 22:03:32.270162 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:03:32.289925 systemd[1]: Stopped target paths.target - Path Units. Nov 12 22:03:32.308384 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 22:03:32.308558 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 12 22:03:32.329382 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 22:03:32.347520 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 22:03:32.365416 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 22:03:32.365543 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:03:32.385439 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 22:03:32.385564 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:03:32.407457 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 22:03:32.497420 ignition[1219]: INFO : Ignition 2.19.0 Nov 12 22:03:32.497420 ignition[1219]: INFO : Stage: umount Nov 12 22:03:32.497420 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:32.497420 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:32.497420 ignition[1219]: INFO : umount: umount passed Nov 12 22:03:32.497420 ignition[1219]: INFO : POST message to Packet Timeline Nov 12 22:03:32.497420 ignition[1219]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:32.407624 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:03:32.426466 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 22:03:32.426624 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 22:03:32.444459 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 12 22:03:32.444647 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 22:03:32.473377 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 22:03:32.487699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 22:03:32.497407 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 22:03:32.497516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:03:32.515524 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 22:03:32.515644 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:03:32.561394 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 22:03:32.561896 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 22:03:32.561963 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 22:03:32.572816 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 22:03:32.572906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 22:03:33.293318 ignition[1219]: INFO : GET result: OK Nov 12 22:03:33.616194 ignition[1219]: INFO : Ignition finished successfully Nov 12 22:03:33.619109 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 22:03:33.619406 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 22:03:33.636353 systemd[1]: Stopped target network.target - Network. Nov 12 22:03:33.651362 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 22:03:33.651532 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 22:03:33.669431 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 22:03:33.669565 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 22:03:33.688505 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 22:03:33.688661 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Nov 12 22:03:33.707504 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 22:03:33.707664 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 22:03:33.726491 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 22:03:33.726656 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 22:03:33.745858 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 22:03:33.760256 systemd-networkd[923]: enp1s0f1np1: DHCPv6 lease lost Nov 12 22:03:33.764584 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 22:03:33.774303 systemd-networkd[923]: enp1s0f0np0: DHCPv6 lease lost Nov 12 22:03:33.783038 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 22:03:33.783341 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 22:03:33.802506 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 22:03:33.802818 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 22:03:33.822634 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 22:03:33.822755 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:03:33.855313 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 22:03:33.882270 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 22:03:33.882512 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:03:33.901594 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:03:33.901764 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:03:33.921568 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 22:03:33.921730 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 22:03:33.939482 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 22:03:33.939642 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:03:33.959696 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:03:33.981394 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 22:03:33.981768 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:03:34.017660 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 22:03:34.017706 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 22:03:34.021424 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 22:03:34.021444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:03:34.050321 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 22:03:34.050384 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:03:34.092340 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 22:03:34.092525 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 22:03:34.122471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:03:34.122631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:34.174500 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Nov 12 22:03:34.176346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 22:03:34.176496 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:03:34.447343 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Nov 12 22:03:34.207393 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 22:03:34.207539 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:03:34.226375 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 22:03:34.226514 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:03:34.248375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:03:34.248511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:34.270495 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 22:03:34.270729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 22:03:34.291954 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 22:03:34.292218 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 22:03:34.313384 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 22:03:34.342422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 22:03:34.382815 systemd[1]: Switching root. Nov 12 22:03:34.548294 systemd-journald[266]: Journal stopped
22:03:13.017063 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Nov 12 22:03:13.017068 kernel: Movable zone start for each node Nov 12 22:03:13.017073 kernel: Early memory node ranges Nov 12 22:03:13.017078 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 12 22:03:13.017083 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 12 22:03:13.017091 kernel: node 0: [mem 0x0000000040400000-0x0000000081b27fff] Nov 12 22:03:13.017097 kernel: node 0: [mem 0x0000000081b2a000-0x000000008afccfff] Nov 12 22:03:13.017102 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Nov 12 22:03:13.017107 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 12 22:03:13.017115 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 12 22:03:13.017122 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 12 22:03:13.017127 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 22:03:13.017132 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 12 22:03:13.017139 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 12 22:03:13.017144 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 12 22:03:13.017149 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 12 22:03:13.017155 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Nov 12 22:03:13.017160 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 12 22:03:13.017165 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 12 22:03:13.017171 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 12 22:03:13.017176 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 12 22:03:13.017181 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 12 22:03:13.017188 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 12 22:03:13.017193 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 12 22:03:13.017198 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 12 22:03:13.017203 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 12 22:03:13.017209 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 12 22:03:13.017214 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 12 22:03:13.017219 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 12 22:03:13.017224 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 12 22:03:13.017230 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 12 22:03:13.017236 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 12 22:03:13.017241 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 12 22:03:13.017246 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 12 22:03:13.017251 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 12 22:03:13.017257 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 12 22:03:13.017262 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 12 22:03:13.017267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 22:03:13.017273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 22:03:13.017278 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 22:03:13.017283 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 22:03:13.017290 kernel: TSC deadline timer available Nov 12 22:03:13.017295 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 12 22:03:13.017300 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Nov 12 22:03:13.017306 kernel: Booting paravirtualized kernel on bare hardware Nov 12 22:03:13.017311 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 22:03:13.017317 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 12 22:03:13.017322 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Nov 12 22:03:13.017327 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Nov 12 22:03:13.017333 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 12 22:03:13.017339 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 22:03:13.017345 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 22:03:13.017350 kernel: random: crng init done Nov 12 22:03:13.017355 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 12 22:03:13.017361 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 12 22:03:13.017366 kernel: Fallback order for Node 0: 0 Nov 12 22:03:13.017371 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Nov 12 22:03:13.017378 kernel: Policy zone: Normal Nov 12 22:03:13.017383 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 22:03:13.017388 kernel: software IO TLB: area num 16. Nov 12 22:03:13.017394 kernel: Memory: 32720296K/33452980K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 732424K reserved, 0K cma-reserved) Nov 12 22:03:13.017399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 12 22:03:13.017405 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 22:03:13.017410 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 22:03:13.017415 kernel: Dynamic Preempt: voluntary Nov 12 22:03:13.017421 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 22:03:13.017427 kernel: rcu: RCU event tracing is enabled. Nov 12 22:03:13.017433 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 12 22:03:13.017438 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 22:03:13.017444 kernel: Rude variant of Tasks RCU enabled. Nov 12 22:03:13.017449 kernel: Tracing variant of Tasks RCU enabled. Nov 12 22:03:13.017455 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 22:03:13.017460 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 12 22:03:13.017465 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 12 22:03:13.017471 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 22:03:13.017476 kernel: Console: colour dummy device 80x25 Nov 12 22:03:13.017482 kernel: printk: console [tty0] enabled Nov 12 22:03:13.017487 kernel: printk: console [ttyS1] enabled Nov 12 22:03:13.017493 kernel: ACPI: Core revision 20230628 Nov 12 22:03:13.017498 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Nov 12 22:03:13.017503 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 22:03:13.017509 kernel: DMAR: Host address width 39 Nov 12 22:03:13.017514 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 12 22:03:13.017520 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 12 22:03:13.017526 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Nov 12 22:03:13.017532 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 12 22:03:13.017537 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 12 22:03:13.017543 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 12 22:03:13.017548 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 12 22:03:13.017553 kernel: x2apic enabled Nov 12 22:03:13.017559 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 12 22:03:13.017564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 12 22:03:13.017569 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Nov 12 22:03:13.017575 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 12 22:03:13.017581 kernel: process: using mwait in idle threads Nov 12 22:03:13.017586 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 12 22:03:13.017592 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Nov 12 22:03:13.017597 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 22:03:13.017602 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 12 22:03:13.017607 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 12 22:03:13.017612 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 12 22:03:13.017618 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 22:03:13.017623 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 12 22:03:13.017628 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 12 22:03:13.017635 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 22:03:13.017640 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 22:03:13.017645 kernel: TAA: Mitigation: TSX disabled Nov 12 22:03:13.017651 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 12 22:03:13.017656 kernel: SRBDS: Mitigation: Microcode Nov 12 22:03:13.017661 kernel: GDS: Mitigation: Microcode Nov 12 22:03:13.017666 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 22:03:13.017672 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 22:03:13.017677 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 22:03:13.017682 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 12 22:03:13.017687 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 12 22:03:13.017693 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 22:03:13.017699 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 12 22:03:13.017704 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 12 22:03:13.017709 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Nov 12 22:03:13.017715 kernel: Freeing SMP alternatives memory: 32K Nov 12 22:03:13.017720 kernel: pid_max: default: 32768 minimum: 301 Nov 12 22:03:13.017725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 22:03:13.017730 kernel: landlock: Up and running. Nov 12 22:03:13.017736 kernel: SELinux: Initializing. Nov 12 22:03:13.017741 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.017746 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.017751 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 12 22:03:13.017758 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 12 22:03:13.017763 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 12 22:03:13.017768 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 12 22:03:13.017774 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 12 22:03:13.017779 kernel: ... version: 4 Nov 12 22:03:13.017784 kernel: ... bit width: 48 Nov 12 22:03:13.017790 kernel: ... generic registers: 4 Nov 12 22:03:13.017795 kernel: ... value mask: 0000ffffffffffff Nov 12 22:03:13.017800 kernel: ... max period: 00007fffffffffff Nov 12 22:03:13.017806 kernel: ... fixed-purpose events: 3 Nov 12 22:03:13.017812 kernel: ... event mask: 000000070000000f Nov 12 22:03:13.017817 kernel: signal: max sigframe size: 2032 Nov 12 22:03:13.017822 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 12 22:03:13.017828 kernel: rcu: Hierarchical SRCU implementation. Nov 12 22:03:13.017833 kernel: rcu: Max phase no-delay instances is 400. Nov 12 22:03:13.017838 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 12 22:03:13.017844 kernel: smp: Bringing up secondary CPUs ... Nov 12 22:03:13.017849 kernel: smpboot: x86: Booting SMP configuration: Nov 12 22:03:13.017855 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 12 22:03:13.017861 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 12 22:03:13.017866 kernel: smp: Brought up 1 node, 16 CPUs Nov 12 22:03:13.017872 kernel: smpboot: Max logical packages: 1 Nov 12 22:03:13.017877 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 12 22:03:13.017882 kernel: devtmpfs: initialized Nov 12 22:03:13.017888 kernel: x86/mm: Memory block size: 128MB Nov 12 22:03:13.017893 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b28000-0x81b28fff] (4096 bytes) Nov 12 22:03:13.017898 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Nov 12 22:03:13.017905 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 22:03:13.017910 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 12 22:03:13.017915 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 22:03:13.017921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 22:03:13.017926 kernel: audit: initializing netlink subsys (disabled) Nov 12 22:03:13.017931 kernel: audit: type=2000 audit(1731448987.039:1): state=initialized audit_enabled=0 res=1 Nov 12 22:03:13.017936 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 22:03:13.017942 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 22:03:13.017948 kernel: cpuidle: using governor menu Nov 12 22:03:13.017953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 22:03:13.017959 kernel: dca service started, version 1.12.1 Nov 12 22:03:13.017964 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 12 22:03:13.017969 kernel: PCI: Using configuration type 1 for base access Nov 12 22:03:13.017975 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 12 22:03:13.017980 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 22:03:13.017985 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 22:03:13.017991 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 22:03:13.017997 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 22:03:13.018002 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 22:03:13.018007 kernel: ACPI: Added _OSI(Module Device) Nov 12 22:03:13.018013 kernel: ACPI: Added _OSI(Processor Device) Nov 12 22:03:13.018018 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 22:03:13.018023 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 22:03:13.018029 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 12 22:03:13.018034 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018039 kernel: ACPI: SSDT 0xFFFF91CC41EC4C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 12 22:03:13.018045 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018051 kernel: ACPI: SSDT 0xFFFF91CC41EB8000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 12 22:03:13.018056 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018061 kernel: ACPI: SSDT 0xFFFF91CC4152E700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 12 22:03:13.018067 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018072 kernel: ACPI: SSDT 0xFFFF91CC41EBE000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 12 22:03:13.018077 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018082 kernel: ACPI: SSDT 0xFFFF91CC41ECE000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 12 22:03:13.018087 kernel: ACPI: Dynamic OEM Table Load: Nov 12 22:03:13.018095 kernel: ACPI: SSDT 0xFFFF91CC41EC1C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 12 22:03:13.018101 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 12 22:03:13.018107 kernel: ACPI: Interpreter enabled Nov 12 22:03:13.018112 kernel: ACPI: PM: (supports S0 S5) Nov 12 22:03:13.018117 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 22:03:13.018122 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 12 22:03:13.018128 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 12 22:03:13.018133 kernel: HEST: Table parsing has been initialized. Nov 12 22:03:13.018138 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 12 22:03:13.018144 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 22:03:13.018150 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 22:03:13.018155 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 12 22:03:13.018161 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 12 22:03:13.018166 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 12 22:03:13.018171 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 12 22:03:13.018177 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 12 22:03:13.018182 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 12 22:03:13.018187 kernel: ACPI: \_TZ_.FN00: New power resource Nov 12 22:03:13.018193 kernel: ACPI: \_TZ_.FN01: New power resource Nov 12 22:03:13.018199 kernel: ACPI: \_TZ_.FN02: New power resource Nov 12 22:03:13.018205 kernel: ACPI: \_TZ_.FN03: New power resource Nov 12 22:03:13.018210 kernel: ACPI: \_TZ_.FN04: New power resource Nov 12 22:03:13.018215 kernel: ACPI: \PIN_: New power resource Nov 12 22:03:13.018220 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 12 22:03:13.018292 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 22:03:13.018347 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 12 22:03:13.018394 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 12 22:03:13.018404 kernel: PCI host bridge to bus 0000:00 Nov 12 22:03:13.018456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 22:03:13.018499 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 22:03:13.018542 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 22:03:13.018583 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 12 22:03:13.018626 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Nov 12 22:03:13.018667 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 12 22:03:13.018726 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 12 22:03:13.018782 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 12 22:03:13.018831 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.018883 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 12 22:03:13.018931 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 12 22:03:13.018982 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 12 22:03:13.019032 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 12 22:03:13.019084 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 12 22:03:13.019137 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 12 22:03:13.019185 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 12 22:03:13.019236 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 12 22:03:13.019284 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 12 22:03:13.019333 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 12 22:03:13.019384 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 12 22:03:13.019430 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 12 22:03:13.019484 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 12 22:03:13.019532 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 12 22:03:13.019583 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 12 22:03:13.019632 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 12 22:03:13.019682 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 12 22:03:13.019740 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 12 22:03:13.019790 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 12 22:03:13.019836 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 12 22:03:13.019888 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 12 22:03:13.019936 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 12 22:03:13.019985 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 12 22:03:13.020036 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 12 22:03:13.020084 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 12 22:03:13.020135 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 12 22:03:13.020182 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 12 22:03:13.020229 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 12 22:03:13.020275 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 12 22:03:13.020327 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 12 22:03:13.020373 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 12 22:03:13.020425 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 12 22:03:13.020472 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020530 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 12 22:03:13.020578 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020632 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 12 22:03:13.020679 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020731 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 12 22:03:13.020782 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020836 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 12 22:03:13.020884 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.020935 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 12 22:03:13.020983 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 12 22:03:13.021034 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 12 22:03:13.021085 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 12 22:03:13.021170 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 12 22:03:13.021217 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 12 22:03:13.021270 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 12 22:03:13.021318 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 12 22:03:13.021372 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 12 22:03:13.021421 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 12 22:03:13.021473 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 12 22:03:13.021521 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 12 22:03:13.021571 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 12 22:03:13.021620 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Nov 12 22:03:13.021673 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 12 22:03:13.021723 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 12 22:03:13.021771 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 12 22:03:13.021823 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 12 22:03:13.021871 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 12 22:03:13.021920 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 12 22:03:13.021968 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 22:03:13.022016 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 12 22:03:13.022064 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 12 22:03:13.022116 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 12 22:03:13.022170 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Nov 12 22:03:13.022221 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 12 22:03:13.022271 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 12 22:03:13.022318 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 12 22:03:13.022367 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 12 22:03:13.022415 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.022463 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 12 22:03:13.022510 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 12 22:03:13.022560 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 12 22:03:13.022614 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 12 22:03:13.022662 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 12 22:03:13.022714 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 12 22:03:13.022762 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 12 22:03:13.022811 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 12 22:03:13.022859 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 12 22:03:13.022910 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 12 22:03:13.022959 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 12 22:03:13.023008 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 12 22:03:13.023055 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 12 22:03:13.023112 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 12 22:03:13.023162 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 12 22:03:13.023211 kernel: pci 0000:06:00.0: supports D1 D2 Nov 12 22:03:13.023261 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 22:03:13.023311 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 12 22:03:13.023360 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.023408 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.023462 kernel: pci_bus 0000:07: extended config space not accessible Nov 12 22:03:13.023517 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 12 22:03:13.023568 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 12 22:03:13.023620 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 12 22:03:13.023673 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 12 22:03:13.023723 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 22:03:13.023773 kernel: pci 0000:07:00.0: supports D1 D2 Nov 12 22:03:13.023824 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 22:03:13.023874 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 12 22:03:13.023922 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.023971 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.023981 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 12 22:03:13.023987 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 12 22:03:13.023993 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 12 22:03:13.023999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 12 22:03:13.024005 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 12 22:03:13.024010 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 12 22:03:13.024016 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 12 22:03:13.024021 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 12 22:03:13.024027 kernel: iommu: Default domain type: Translated Nov 12 22:03:13.024034 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 22:03:13.024040 kernel: PCI: Using ACPI for IRQ routing Nov 12 22:03:13.024045 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 22:03:13.024051 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 12 22:03:13.024057 kernel: e820: reserve RAM buffer [mem 0x81b28000-0x83ffffff] Nov 12 22:03:13.024062 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 12 22:03:13.024068 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 12 22:03:13.024073 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 12 22:03:13.024079 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 12 22:03:13.024164 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 12 22:03:13.024215 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 12 22:03:13.024266 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 22:03:13.024275 kernel: vgaarb: loaded Nov 12 22:03:13.024281 kernel: clocksource: Switched to clocksource tsc-early Nov 12 22:03:13.024287 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 22:03:13.024293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 22:03:13.024299 kernel: pnp: PnP ACPI init Nov 12 22:03:13.024348 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 12 22:03:13.024401 kernel: pnp 00:02: [dma 0 disabled] Nov 12 22:03:13.024448 kernel: pnp 00:03: [dma 0 disabled] Nov 12 22:03:13.024498 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 12 22:03:13.024542 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 12 22:03:13.024589 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Nov 12 22:03:13.024635 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Nov 12 22:03:13.024682 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Nov 12 22:03:13.024724 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Nov 12 22:03:13.024769 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Nov 12 22:03:13.024814 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 12 22:03:13.024859 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 12 22:03:13.024901 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 12 22:03:13.024948 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 12 22:03:13.024995 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Nov 12 22:03:13.025041 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 12 22:03:13.025086 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 12 22:03:13.025163 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 12 22:03:13.025207 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 12 22:03:13.025250 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 12 22:03:13.025296 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Nov 12 22:03:13.025342 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Nov 12 22:03:13.025350 kernel: pnp: PnP ACPI: found 10 devices Nov 12 22:03:13.025356 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 22:03:13.025362 kernel: NET: Registered PF_INET protocol family Nov 12 22:03:13.025368 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 22:03:13.025374 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 12 22:03:13.025379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 22:03:13.025386 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 22:03:13.025392 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 12 22:03:13.025398 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 12 22:03:13.025404 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.025409 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 12 22:03:13.025415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 22:03:13.025421 kernel: NET: Registered PF_XDP protocol family Nov 12 22:03:13.025468 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 12 22:03:13.025516 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 12 22:03:13.025567 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 12 22:03:13.025615 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025665 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025714 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025763 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 12 22:03:13.025812 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 12 22:03:13.025860 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 12 22:03:13.025908 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 12 22:03:13.025958 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 12 22:03:13.026007 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 12 22:03:13.026054 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 12 22:03:13.026105 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 12 22:03:13.026192 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 12 22:03:13.026243 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 12 
22:03:13.026290 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 12 22:03:13.026338 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 12 22:03:13.026387 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 12 22:03:13.026436 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.026484 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.026532 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 12 22:03:13.026580 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 12 22:03:13.026630 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 12 22:03:13.026675 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 12 22:03:13.026717 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 22:03:13.026760 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 22:03:13.026802 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 22:03:13.026844 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 12 22:03:13.026886 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 12 22:03:13.026935 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 12 22:03:13.026982 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 12 22:03:13.027033 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 12 22:03:13.027077 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 12 22:03:13.027166 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 12 22:03:13.027210 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 12 22:03:13.027258 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 12 22:03:13.027305 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 12 22:03:13.027352 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 12 22:03:13.027400 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 12 22:03:13.027408 kernel: PCI: CLS 64 bytes, default 64 Nov 12 22:03:13.027414 kernel: DMAR: No ATSR found Nov 12 22:03:13.027420 kernel: DMAR: No SATC found Nov 12 22:03:13.027426 kernel: DMAR: dmar0: Using Queued invalidation Nov 12 22:03:13.027472 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 12 22:03:13.027523 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 12 22:03:13.027570 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 12 22:03:13.027617 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 12 22:03:13.027664 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 12 22:03:13.027712 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 12 22:03:13.027758 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 12 22:03:13.027805 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 12 22:03:13.027852 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 12 22:03:13.027902 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 12 22:03:13.027949 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 12 22:03:13.027996 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 12 22:03:13.028042 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 12 22:03:13.028092 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 12 22:03:13.028174 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 12 22:03:13.028222 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 12 22:03:13.028269 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 12 
22:03:13.028315 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 12 22:03:13.028365 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 12 22:03:13.028412 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 12 22:03:13.028460 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 12 22:03:13.028508 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 12 22:03:13.028558 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 12 22:03:13.028607 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 12 22:03:13.028657 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 12 22:03:13.028705 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 12 22:03:13.028757 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 12 22:03:13.028766 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 12 22:03:13.028772 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 12 22:03:13.028778 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 12 22:03:13.028783 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 12 22:03:13.028789 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 12 22:03:13.028795 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 12 22:03:13.028800 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 12 22:03:13.028852 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 12 22:03:13.028861 kernel: Initialise system trusted keyrings Nov 12 22:03:13.028867 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 12 22:03:13.028873 kernel: Key type asymmetric registered Nov 12 22:03:13.028878 kernel: Asymmetric key parser 'x509' registered Nov 12 22:03:13.028884 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 22:03:13.028889 kernel: io scheduler mq-deadline registered Nov 12 22:03:13.028895 kernel: io scheduler kyber registered Nov 12 22:03:13.028901 kernel: io scheduler bfq registered Nov 12 22:03:13.028949 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 12 22:03:13.028998 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 12 22:03:13.029046 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 12 22:03:13.029096 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 12 22:03:13.029176 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 12 22:03:13.029224 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 12 22:03:13.029275 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 12 22:03:13.029286 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 12 22:03:13.029292 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 12 22:03:13.029298 kernel: pstore: Using crash dump compression: deflate Nov 12 22:03:13.029304 kernel: pstore: Registered erst as persistent store backend Nov 12 22:03:13.029309 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 22:03:13.029315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 22:03:13.029321 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 22:03:13.029326 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 12 22:03:13.029332 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 12 22:03:13.029383 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 12 22:03:13.029392 kernel: i8042: PNP: No PS/2 controller found. 
Nov 12 22:03:13.029434 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 12 22:03:13.029478 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 12 22:03:13.029522 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-11-12T22:03:11 UTC (1731448991) Nov 12 22:03:13.029565 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 12 22:03:13.029573 kernel: intel_pstate: Intel P-state driver initializing Nov 12 22:03:13.029581 kernel: intel_pstate: Disabling energy efficiency optimization Nov 12 22:03:13.029587 kernel: intel_pstate: HWP enabled Nov 12 22:03:13.029592 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 12 22:03:13.029598 kernel: vesafb: scrolling: redraw Nov 12 22:03:13.029604 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 12 22:03:13.029609 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000fc4729d1, using 768k, total 768k Nov 12 22:03:13.029615 kernel: Console: switching to colour frame buffer device 128x48 Nov 12 22:03:13.029621 kernel: fb0: VESA VGA frame buffer device Nov 12 22:03:13.029627 kernel: NET: Registered PF_INET6 protocol family Nov 12 22:03:13.029633 kernel: Segment Routing with IPv6 Nov 12 22:03:13.029639 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 22:03:13.029645 kernel: NET: Registered PF_PACKET protocol family Nov 12 22:03:13.029650 kernel: Key type dns_resolver registered Nov 12 22:03:13.029656 kernel: microcode: Microcode Update Driver: v2.2. Nov 12 22:03:13.029662 kernel: IPI shorthand broadcast: enabled Nov 12 22:03:13.029667 kernel: sched_clock: Marking stable (2476000552, 1385614430)->(4405288002, -543673020) Nov 12 22:03:13.029673 kernel: registered taskstats version 1 Nov 12 22:03:13.029679 kernel: Loading compiled-in X.509 certificates Nov 12 22:03:13.029684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 22:03:13.029691 kernel: Key type .fscrypt registered Nov 12 22:03:13.029697 kernel: Key type fscrypt-provisioning registered Nov 12 22:03:13.029702 kernel: ima: Allocated hash algorithm: sha1 Nov 12 22:03:13.029708 kernel: ima: No architecture policies found Nov 12 22:03:13.029714 kernel: clk: Disabling unused clocks Nov 12 22:03:13.029719 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 22:03:13.029725 kernel: Write protecting the kernel read-only data: 36864k Nov 12 22:03:13.029731 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 22:03:13.029738 kernel: Run /init as init process Nov 12 22:03:13.029743 kernel: with arguments: Nov 12 22:03:13.029749 kernel: /init Nov 12 22:03:13.029755 kernel: with environment: Nov 12 22:03:13.029760 kernel: HOME=/ Nov 12 22:03:13.029766 kernel: TERM=linux Nov 12 22:03:13.029771 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 22:03:13.029778 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:03:13.029787 systemd[1]: Detected architecture x86-64. Nov 12 22:03:13.029793 systemd[1]: Running in initrd. Nov 12 22:03:13.029798 systemd[1]: No hostname configured, using default hostname. Nov 12 22:03:13.029804 systemd[1]: Hostname set to . Nov 12 22:03:13.029810 systemd[1]: Initializing machine ID from random generator. 
Nov 12 22:03:13.029816 systemd[1]: Queued start job for default target initrd.target. Nov 12 22:03:13.029822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:03:13.029828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:03:13.029835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 22:03:13.029841 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:03:13.029847 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 22:03:13.029853 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 22:03:13.029860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 22:03:13.029866 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 22:03:13.029873 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 12 22:03:13.029879 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 12 22:03:13.029885 kernel: clocksource: Switched to clocksource tsc Nov 12 22:03:13.029891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:03:13.029897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:03:13.029903 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:03:13.029908 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:03:13.029914 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:03:13.029920 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:03:13.029927 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:03:13.029933 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:03:13.029939 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 22:03:13.029945 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 22:03:13.029951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:03:13.029957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:03:13.029963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:03:13.029969 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:03:13.029975 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 22:03:13.029982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:03:13.029988 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 22:03:13.029994 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 22:03:13.030000 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:03:13.030015 systemd-journald[266]: Collecting audit messages is disabled. Nov 12 22:03:13.030030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:03:13.030037 systemd-journald[266]: Journal started Nov 12 22:03:13.030051 systemd-journald[266]: Runtime Journal (/run/log/journal/de3e9ab6c27746e8ad7e637fc13a3ac9) is 8.0M, max 639.9M, 631.9M free. 
Nov 12 22:03:13.052933 systemd-modules-load[268]: Inserted module 'overlay' Nov 12 22:03:13.075092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:03:13.103744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 22:03:13.165039 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:03:13.165055 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 22:03:13.165064 kernel: Bridge firewalling registered Nov 12 22:03:13.160269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:03:13.165029 systemd-modules-load[268]: Inserted module 'br_netfilter' Nov 12 22:03:13.186501 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 22:03:13.206425 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:03:13.220422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:13.264392 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:03:13.275716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:03:13.304327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:03:13.305201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:03:13.310154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:03:13.324866 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:13.348826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:03:13.369950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:03:13.403542 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 22:03:13.425425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:03:13.435539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:03:13.445767 systemd-resolved[305]: Positive Trust Anchors: Nov 12 22:03:13.490211 dracut-cmdline[303]: dracut-dracut-053 Nov 12 22:03:13.490211 dracut-cmdline[303]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 22:03:13.528260 kernel: SCSI subsystem initialized Nov 12 22:03:13.528291 kernel: Loading iSCSI transport class v2.0-870. Nov 12 22:03:13.445772 systemd-resolved[305]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:03:13.606213 kernel: iscsi: registered transport (tcp) Nov 12 22:03:13.606228 kernel: iscsi: registered transport (qla4xxx) Nov 12 22:03:13.606236 kernel: QLogic iSCSI HBA Driver Nov 12 22:03:13.445795 systemd-resolved[305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:03:13.447447 systemd-resolved[305]: Defaulting to hostname 'linux'. Nov 12 22:03:13.457284 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:03:13.478330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:03:13.514208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:03:13.611999 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 22:03:13.679398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 22:03:13.795541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 22:03:13.795559 kernel: device-mapper: uevent: version 1.0.3 Nov 12 22:03:13.815506 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 22:03:13.874156 kernel: raid6: avx2x4 gen() 52792 MB/s Nov 12 22:03:13.906130 kernel: raid6: avx2x2 gen() 53242 MB/s Nov 12 22:03:13.942763 kernel: raid6: avx2x1 gen() 44605 MB/s Nov 12 22:03:13.942779 kernel: raid6: using algorithm avx2x2 gen() 53242 MB/s Nov 12 22:03:13.990627 kernel: raid6: .... xor() 31127 MB/s, rmw enabled Nov 12 22:03:13.990643 kernel: raid6: using avx2x2 recovery algorithm Nov 12 22:03:14.032122 kernel: xor: automatically using best checksumming function avx Nov 12 22:03:14.145111 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 22:03:14.151293 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:03:14.172237 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:03:14.179599 systemd-udevd[497]: Using default interface naming scheme 'v255'. Nov 12 22:03:14.183327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:03:14.216400 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 22:03:14.271264 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Nov 12 22:03:14.291184 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:03:14.312378 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:03:14.393151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:03:14.437745 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 12 22:03:14.437763 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 12 22:03:14.410201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 12 22:03:14.531177 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 22:03:14.531202 kernel: ACPI: bus type USB registered Nov 12 22:03:14.531212 kernel: usbcore: registered new interface driver usbfs Nov 12 22:03:14.531222 kernel: usbcore: registered new interface driver hub Nov 12 22:03:14.531230 kernel: usbcore: registered new device driver usb Nov 12 22:03:14.531239 kernel: PTP clock support registered Nov 12 22:03:14.440206 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:03:14.440306 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:14.581215 kernel: libata version 3.00 loaded. Nov 12 22:03:14.581231 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 22:03:14.581239 kernel: AES CTR mode by8 optimization enabled Nov 12 22:03:14.581247 kernel: ahci 0000:00:17.0: version 3.0 Nov 12 22:03:14.857160 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 12 22:03:14.857240 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 12 22:03:14.857307 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 12 22:03:14.857370 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 12 22:03:14.857431 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 12 22:03:14.857491 kernel: scsi host0: ahci Nov 12 22:03:14.857554 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 12 22:03:14.857614 kernel: scsi host1: ahci Nov 12 22:03:14.857673 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 12 22:03:14.857735 kernel: scsi host2: ahci Nov 12 22:03:14.857794 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 12 22:03:14.857853 kernel: scsi host3: ahci Nov 12 22:03:14.857915 kernel: hub 1-0:1.0: USB hub found Nov 12 22:03:14.857985 kernel: scsi host4: ahci Nov 12 22:03:14.858043 kernel: hub 1-0:1.0: 16 ports detected Nov 12 22:03:14.858115 kernel: scsi host5: ahci Nov 12 22:03:14.858176 kernel: hub 2-0:1.0: USB hub found Nov 12 22:03:14.858244 kernel: scsi host6: ahci Nov 12 22:03:14.858304 kernel: hub 2-0:1.0: 10 ports detected Nov 12 22:03:14.858368 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 12 22:03:14.858377 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 12 22:03:14.858384 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 12 22:03:14.858393 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Nov 12 22:03:14.858401 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 12 22:03:14.858408 kernel: pps pps0: new PPS source ptp0 Nov 12 22:03:14.858467 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 12 22:03:14.858475 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 12 22:03:15.066238 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 12 22:03:15.066260 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 12 22:03:15.066283 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 12 22:03:15.066528 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 12 22:03:15.066545 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:ca Nov 12 22:03:15.066772 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 12 22:03:15.126298 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 12 22:03:15.126370 kernel: hub 1-14:1.0: USB hub found Nov 12 22:03:15.126439 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 12 22:03:15.126502 kernel: pps pps1: new PPS source ptp1 Nov 12 22:03:15.126564 kernel: hub 1-14:1.0: 4 ports detected Nov 12 22:03:15.126628 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 12 22:03:15.219206 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.219215 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 12 22:03:15.219280 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.219289 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:cb Nov 12 22:03:15.219351 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 12 22:03:15.219359 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 12 22:03:15.219422 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.219430 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 12 22:03:15.219491 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:14.493702 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:03:15.383223 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 12 22:03:15.383234 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 12 22:03:15.383242 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 12 22:03:15.383252 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 12 22:03:14.560180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 12 22:03:15.503431 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 12 22:03:15.503444 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Nov 12 22:03:15.897237 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 12 22:03:15.897248 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 12 22:03:15.897326 kernel: ata1.00: Features: NCQ-prio Nov 12 22:03:15.897335 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 12 22:03:15.897446 kernel: ata2.00: Features: NCQ-prio Nov 12 22:03:15.897454 kernel: ata1.00: configured for UDMA/133 Nov 12 22:03:15.897462 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 12 22:03:15.897536 kernel: ata2.00: configured for UDMA/133 Nov 12 22:03:15.897545 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 12 22:03:16.093989 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 12 22:03:16.094072 kernel: ata2.00: Enabling discard_zeroes_data Nov 12 22:03:16.094097 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 12 22:03:16.094185 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.094200 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 12 22:03:16.094266 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 12 22:03:16.094342 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 12 22:03:16.094402 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 12 22:03:16.094469 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 12 22:03:16.094528 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 12 22:03:16.094598 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 12 22:03:16.094663 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.094671 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 22:03:16.094679 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 12 22:03:16.094734 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 12 22:03:16.094806 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 12 22:03:16.094866 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 12 22:03:16.094935 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 12 22:03:16.094994 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 22:03:16.095002 kernel: GPT:9289727 != 937703087 Nov 12 22:03:16.095009 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 22:03:16.095016 kernel: GPT:9289727 != 937703087 Nov 12 22:03:16.095023 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 12 22:03:16.095030 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:16.095037 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 12 22:03:16.095125 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 12 22:03:16.095210 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 12 22:03:16.095286 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Nov 12 22:03:16.468887 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 12 22:03:16.468965 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 12 22:03:16.469031 kernel: ata2.00: Enabling discard_zeroes_data Nov 12 22:03:16.469040 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 12 22:03:16.469135 kernel: usbcore: registered new interface driver usbhid Nov 12 22:03:16.469159 kernel: usbhid: USB HID core driver Nov 12 22:03:16.469180 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (564) Nov 12 22:03:16.469187 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (553) Nov 12 22:03:16.469194 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 12 22:03:16.469201 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 12 22:03:16.469264 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 12 22:03:16.469324 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 12 22:03:16.469394 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 12 22:03:16.469402 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 12 22:03:16.469465 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.469474 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:16.469481 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.469488 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 12 22:03:16.469548 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:14.560328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:15.339467 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:03:16.541551 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:16.541563 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:15.417312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:03:16.557192 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 12 22:03:15.513430 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 22:03:15.514447 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:03:15.529583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:03:16.573349 disk-uuid[704]: Primary Header is updated. Nov 12 22:03:16.573349 disk-uuid[704]: Secondary Entries is updated. Nov 12 22:03:16.573349 disk-uuid[704]: Secondary Header is updated. Nov 12 22:03:15.539077 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Nov 12 22:03:15.613216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 22:03:16.091308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:16.110304 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:03:16.161754 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 12 22:03:16.206686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 12 22:03:16.261257 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 12 22:03:16.287163 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 12 22:03:16.332734 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 12 22:03:16.397377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 22:03:16.446310 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:03:16.582276 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:16.617340 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 12 22:03:17.518362 kernel: ata1.00: Enabling discard_zeroes_data Nov 12 22:03:17.539127 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 12 22:03:17.539164 disk-uuid[705]: The operation has completed successfully. Nov 12 22:03:17.578036 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 22:03:17.578083 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 22:03:17.614478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 22:03:17.640254 sh[739]: Success Nov 12 22:03:17.650187 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 12 22:03:17.684629 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 22:03:17.705035 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 22:03:17.720421 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 22:03:17.764139 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 22:03:17.764157 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:17.786716 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 22:03:17.806876 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 22:03:17.825854 kernel: BTRFS info (device dm-0): using free space tree Nov 12 22:03:17.865137 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 12 22:03:17.867217 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 22:03:17.876586 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 22:03:17.886357 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 12 22:03:17.996757 kernel: BTRFS info (device sdb6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:17.996771 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:17.996779 kernel: BTRFS info (device sdb6): using free space tree Nov 12 22:03:17.996787 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 12 22:03:17.996794 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 12 22:03:17.926561 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 22:03:18.034369 kernel: BTRFS info (device sdb6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:18.036441 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 22:03:18.046939 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 22:03:18.092543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:03:18.111312 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:03:18.141522 ignition[829]: Ignition 2.19.0 Nov 12 22:03:18.141527 ignition[829]: Stage: fetch-offline Nov 12 22:03:18.143692 unknown[829]: fetched base config from "system" Nov 12 22:03:18.141549 ignition[829]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:18.143696 unknown[829]: fetched user config from "system" Nov 12 22:03:18.141555 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:18.144743 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:03:18.141610 ignition[829]: parsed url from cmdline: "" Nov 12 22:03:18.146565 systemd-networkd[923]: lo: Link UP Nov 12 22:03:18.141612 ignition[829]: no config URL provided Nov 12 22:03:18.146567 systemd-networkd[923]: lo: Gained carrier Nov 12 22:03:18.141615 ignition[829]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 22:03:18.148866 systemd-networkd[923]: Enumeration completed Nov 12 22:03:18.141637 ignition[829]: parsing config with SHA512: a49f23bd1b04298a78e389a4771c717b69c737912acd6a7dd82c6fc9ea4de2691a74696655b88b644d7eb1f9cf59b3ddaaad99dbe0b79f027a08d5adfe94950f Nov 12 22:03:18.148938 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:03:18.143911 ignition[829]: fetch-offline: fetch-offline passed Nov 12 22:03:18.149603 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:03:18.143914 ignition[829]: POST message to Packet Timeline Nov 12 22:03:18.172470 systemd[1]: Reached target network.target - Network. Nov 12 22:03:18.143917 ignition[829]: POST Status error: resource requires networking Nov 12 22:03:18.180150 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:03:18.143952 ignition[829]: Ignition finished successfully Nov 12 22:03:18.188366 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 22:03:18.213824 ignition[935]: Ignition 2.19.0 Nov 12 22:03:18.206322 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 22:03:18.213829 ignition[935]: Stage: kargs Nov 12 22:03:18.400276 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 12 22:03:18.208465 systemd-networkd[923]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 12 22:03:18.213960 ignition[935]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:18.395825 systemd-networkd[923]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:03:18.213968 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:18.214604 ignition[935]: kargs: kargs passed Nov 12 22:03:18.214607 ignition[935]: POST message to Packet Timeline Nov 12 22:03:18.214618 ignition[935]: GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:18.215156 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49181->[::1]:53: read: connection refused Nov 12 22:03:18.415671 ignition[935]: GET https://metadata.packet.net/metadata: attempt #2 Nov 12 22:03:18.416358 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56829->[::1]:53: read: connection refused Nov 12 22:03:18.617132 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 12 22:03:18.618655 systemd-networkd[923]: eno1: Link UP Nov 12 22:03:18.618847 systemd-networkd[923]: eno2: Link UP Nov 12 22:03:18.619027 systemd-networkd[923]: enp1s0f0np0: Link UP Nov 12 22:03:18.619252 systemd-networkd[923]: enp1s0f0np0: Gained carrier Nov 12 22:03:18.629287 systemd-networkd[923]: enp1s0f1np1: Link UP Nov 12 22:03:18.670469 systemd-networkd[923]: enp1s0f0np0: DHCPv4 address 147.75.202.249/31, gateway 147.75.202.248 acquired from 145.40.83.140 Nov 12 22:03:18.816890 ignition[935]: GET https://metadata.packet.net/metadata: attempt #3 Nov 12 22:03:18.818181 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36283->[::1]:53: read: connection refused Nov 12 22:03:19.432876 systemd-networkd[923]: enp1s0f1np1: Gained carrier Nov 12 22:03:19.618690 ignition[935]: GET https://metadata.packet.net/metadata: attempt #4 Nov 12 22:03:19.619963 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33913->[::1]:53: read: connection refused Nov 12 22:03:20.328680 systemd-networkd[923]: enp1s0f0np0: Gained IPv6LL Nov 12 22:03:21.221238 ignition[935]: GET https://metadata.packet.net/metadata: attempt #5 Nov 12 22:03:21.222573 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54864->[::1]:53: read: connection refused Nov 12 22:03:21.225386 systemd-networkd[923]: enp1s0f1np1: Gained IPv6LL Nov 12 22:03:24.426057 ignition[935]: GET https://metadata.packet.net/metadata: attempt #6 Nov 12 22:03:25.803829 ignition[935]: GET result: OK Nov 12 22:03:26.095652 ignition[935]: Ignition finished successfully Nov 12 22:03:26.098186 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 22:03:26.128493 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 12 22:03:26.139717 ignition[955]: Ignition 2.19.0 Nov 12 22:03:26.139722 ignition[955]: Stage: disks Nov 12 22:03:26.139824 ignition[955]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:26.139830 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:26.140356 ignition[955]: disks: disks passed Nov 12 22:03:26.140358 ignition[955]: POST message to Packet Timeline Nov 12 22:03:26.140367 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:26.697390 ignition[955]: GET result: OK Nov 12 22:03:27.001965 ignition[955]: Ignition finished successfully Nov 12 22:03:27.005487 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 22:03:27.020430 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 22:03:27.038355 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 22:03:27.060513 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:03:27.072664 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:03:27.100503 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:03:27.129378 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 22:03:27.164642 systemd-fsck[975]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 22:03:27.175861 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 22:03:27.193202 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 22:03:27.297142 kernel: EXT4-fs (sdb9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 22:03:27.297681 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 22:03:27.306539 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 22:03:27.338303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:03:27.369153 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (984) Nov 12 22:03:27.346709 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 22:03:27.481271 kernel: BTRFS info (device sdb6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:27.481290 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:27.481298 kernel: BTRFS info (device sdb6): using free space tree Nov 12 22:03:27.481305 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 12 22:03:27.481313 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 12 22:03:27.369745 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 12 22:03:27.504470 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 12 22:03:27.515346 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 22:03:27.556415 coreos-metadata[986]: Nov 12 22:03:27.553 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 12 22:03:27.515434 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:03:27.598295 coreos-metadata[1002]: Nov 12 22:03:27.553 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 12 22:03:27.540721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:03:27.565408 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 22:03:27.601349 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 22:03:27.646215 initrd-setup-root[1016]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 22:03:27.656212 initrd-setup-root[1023]: cut: /sysroot/etc/group: No such file or directory Nov 12 22:03:27.667216 initrd-setup-root[1030]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 22:03:27.678157 initrd-setup-root[1037]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 22:03:27.685397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 22:03:27.721445 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 22:03:27.764327 kernel: BTRFS info (device sdb6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:27.740387 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 22:03:27.773880 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 22:03:27.796589 ignition[1109]: INFO : Ignition 2.19.0 Nov 12 22:03:27.796589 ignition[1109]: INFO : Stage: mount Nov 12 22:03:27.803299 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:27.803299 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:27.803299 ignition[1109]: INFO : mount: mount passed Nov 12 22:03:27.803299 ignition[1109]: INFO : POST message to Packet Timeline Nov 12 22:03:27.803299 ignition[1109]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:27.801998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 22:03:28.205846 coreos-metadata[1002]: Nov 12 22:03:28.205 INFO Fetch successful Nov 12 22:03:28.235087 ignition[1109]: INFO : GET result: OK Nov 12 22:03:28.238868 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 12 22:03:28.238923 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 12 22:03:28.310705 coreos-metadata[986]: Nov 12 22:03:28.310 INFO Fetch successful Nov 12 22:03:28.380094 coreos-metadata[986]: Nov 12 22:03:28.380 INFO wrote hostname ci-4081.2.0-a-a9d0314af7 to /sysroot/etc/hostname Nov 12 22:03:28.381786 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 22:03:28.544458 ignition[1109]: INFO : Ignition finished successfully Nov 12 22:03:28.546991 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 22:03:28.581312 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 22:03:28.592168 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:03:28.654180 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1137) Nov 12 22:03:28.654198 kernel: BTRFS info (device sdb6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 22:03:28.674884 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:03:28.693139 kernel: BTRFS info (device sdb6): using free space tree Nov 12 22:03:28.731640 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 12 22:03:28.731663 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 12 22:03:28.745172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:03:28.771562 ignition[1154]: INFO : Ignition 2.19.0 Nov 12 22:03:28.771562 ignition[1154]: INFO : Stage: files Nov 12 22:03:28.786359 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:28.786359 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:28.786359 ignition[1154]: DEBUG : files: compiled without relabeling support, skipping Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 22:03:28.786359 ignition[1154]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:03:28.786359 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 22:03:28.775443 unknown[1154]: wrote ssh authorized keys file for user: core Nov 12 22:03:28.918182 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 22:03:28.958786 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:28.975388 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Nov 12 22:03:29.577993 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 22:03:30.412726 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:03:30.412726 ignition[1154]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:03:30.443310 ignition[1154]: INFO : files: files passed Nov 12 22:03:30.443310 ignition[1154]: INFO : POST message to Packet Timeline Nov 12 22:03:30.443310 ignition[1154]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:31.174469 ignition[1154]: INFO : GET result: OK Nov 12 22:03:31.435308 ignition[1154]: INFO : Ignition finished successfully Nov 12 22:03:31.438318 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:03:31.469353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:03:31.479685 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:03:31.489631 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:03:31.489697 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:03:31.534402 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:03:31.549684 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 22:03:31.580567 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:03:31.580567 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:03:31.594445 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:03:31.584520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Nov 12 22:03:31.663358 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 22:03:31.663407 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 22:03:31.682490 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 22:03:31.703254 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 22:03:31.720334 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 22:03:31.734497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 22:03:31.806164 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:03:31.834512 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 22:03:31.863332 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:03:31.874723 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:03:31.895783 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 22:03:31.914726 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 22:03:31.915147 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:03:31.943819 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 22:03:31.965719 systemd[1]: Stopped target basic.target - Basic System. Nov 12 22:03:31.983687 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 22:03:32.002710 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:03:32.023687 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 22:03:32.044820 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 22:03:32.064715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:03:32.085753 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 22:03:32.107738 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 22:03:32.127697 systemd[1]: Stopped target swap.target - Swaps. Nov 12 22:03:32.146601 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 22:03:32.146997 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:03:32.182568 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:03:32.192730 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:03:32.214592 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 22:03:32.215029 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:03:32.237596 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 22:03:32.237993 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 22:03:32.269689 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 22:03:32.270162 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:03:32.289925 systemd[1]: Stopped target paths.target - Path Units. Nov 12 22:03:32.308384 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 22:03:32.308558 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 12 22:03:32.329382 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 22:03:32.347520 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 22:03:32.365416 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 22:03:32.365543 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:03:32.385439 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 22:03:32.385564 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:03:32.407457 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 22:03:32.497420 ignition[1219]: INFO : Ignition 2.19.0 Nov 12 22:03:32.497420 ignition[1219]: INFO : Stage: umount Nov 12 22:03:32.497420 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:03:32.497420 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 12 22:03:32.497420 ignition[1219]: INFO : umount: umount passed Nov 12 22:03:32.497420 ignition[1219]: INFO : POST message to Packet Timeline Nov 12 22:03:32.497420 ignition[1219]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 12 22:03:32.407624 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:03:32.426466 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 22:03:32.426624 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 22:03:32.444459 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 12 22:03:32.444647 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 22:03:32.473377 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 22:03:32.487699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 22:03:32.497407 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 22:03:32.497516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:03:32.515524 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 22:03:32.515644 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:03:32.561394 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 22:03:32.561896 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 22:03:32.561963 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 22:03:32.572816 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 22:03:32.572906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 22:03:33.293318 ignition[1219]: INFO : GET result: OK Nov 12 22:03:33.616194 ignition[1219]: INFO : Ignition finished successfully Nov 12 22:03:33.619109 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 22:03:33.619406 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 22:03:33.636353 systemd[1]: Stopped target network.target - Network. Nov 12 22:03:33.651362 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 22:03:33.651532 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 22:03:33.669431 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 22:03:33.669565 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 22:03:33.688505 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 22:03:33.688661 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Nov 12 22:03:33.707504 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 22:03:33.707664 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 22:03:33.726491 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 22:03:33.726656 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 22:03:33.745858 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 22:03:33.760256 systemd-networkd[923]: enp1s0f1np1: DHCPv6 lease lost Nov 12 22:03:33.764584 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 22:03:33.774303 systemd-networkd[923]: enp1s0f0np0: DHCPv6 lease lost Nov 12 22:03:33.783038 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 22:03:33.783341 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 22:03:33.802506 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 22:03:33.802818 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 22:03:33.822634 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 22:03:33.822755 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:03:33.855313 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 22:03:33.882270 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 22:03:33.882512 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:03:33.901594 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:03:33.901764 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:03:33.921568 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 22:03:33.921730 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 22:03:33.939482 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 22:03:33.939642 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:03:33.959696 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:03:33.981394 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 22:03:33.981768 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:03:34.017660 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 22:03:34.017706 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 22:03:34.021424 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 22:03:34.021444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:03:34.050321 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 22:03:34.050384 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:03:34.092340 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 22:03:34.092525 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 22:03:34.122471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:03:34.122631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:03:34.174500 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Nov 12 22:03:34.176346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 22:03:34.176496 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:03:34.447343 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Nov 12 22:03:34.207393 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 22:03:34.207539 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:03:34.226375 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 22:03:34.226514 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:03:34.248375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:03:34.248511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:34.270495 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 22:03:34.270729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 22:03:34.291954 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 22:03:34.292218 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 22:03:34.313384 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 22:03:34.342422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 22:03:34.382815 systemd[1]: Switching root. Nov 12 22:03:34.548294 systemd-journald[266]: Journal stopped Nov 12 22:03:37.165507 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 22:03:37.165521 kernel: SELinux: policy capability open_perms=1 Nov 12 22:03:37.165528 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 22:03:37.165535 kernel: SELinux: policy capability always_check_network=0 Nov 12 22:03:37.165540 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 22:03:37.165545 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 22:03:37.165551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 22:03:37.165556 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 22:03:37.165561 kernel: audit: type=1403 audit(1731449014.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 22:03:37.165568 systemd[1]: Successfully loaded SELinux policy in 160.469ms. Nov 12 22:03:37.165576 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.171ms. Nov 12 22:03:37.165582 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:03:37.165588 systemd[1]: Detected architecture x86-64. Nov 12 22:03:37.165594 systemd[1]: Detected first boot. Nov 12 22:03:37.165600 systemd[1]: Hostname set to . Nov 12 22:03:37.165607 systemd[1]: Initializing machine ID from random generator. Nov 12 22:03:37.165613 zram_generator::config[1270]: No configuration found. Nov 12 22:03:37.165620 systemd[1]: Populated /etc with preset unit settings. Nov 12 22:03:37.165626 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 22:03:37.165632 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Nov 12 22:03:37.165638 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 22:03:37.165644 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 22:03:37.165651 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 22:03:37.165657 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 22:03:37.165664 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 22:03:37.165670 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 22:03:37.165677 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 22:03:37.165683 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 22:03:37.165689 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 22:03:37.165696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:03:37.165703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:03:37.165709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 22:03:37.165715 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 22:03:37.165721 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 22:03:37.165727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:03:37.165734 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Nov 12 22:03:37.165740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:03:37.165747 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 22:03:37.165753 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 22:03:37.165760 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 22:03:37.165768 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 22:03:37.165774 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:03:37.165780 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:03:37.165787 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:03:37.165794 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:03:37.165801 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 22:03:37.165807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 22:03:37.165814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:03:37.165820 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:03:37.165827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:03:37.165834 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 22:03:37.165841 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 22:03:37.165848 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 22:03:37.165854 systemd[1]: Mounting media.mount - External Media Directory... 
Nov 12 22:03:37.165861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:03:37.165867 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 22:03:37.165874 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 22:03:37.165883 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 22:03:37.165890 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 22:03:37.165897 systemd[1]: Reached target machines.target - Containers. Nov 12 22:03:37.165903 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 22:03:37.165910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:03:37.165916 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:03:37.165923 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 22:03:37.165929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:03:37.165936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:03:37.165943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:03:37.165950 kernel: ACPI: bus type drm_connector registered Nov 12 22:03:37.165956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 22:03:37.165962 kernel: fuse: init (API version 7.39) Nov 12 22:03:37.165968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:03:37.165975 kernel: loop: module loaded Nov 12 22:03:37.165981 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 22:03:37.165987 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 22:03:37.165995 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 22:03:37.166001 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 22:03:37.166008 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 22:03:37.166014 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:03:37.166029 systemd-journald[1373]: Collecting audit messages is disabled. Nov 12 22:03:37.166044 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:03:37.166051 systemd-journald[1373]: Journal started Nov 12 22:03:37.166064 systemd-journald[1373]: Runtime Journal (/run/log/journal/0ced460d902d4a0682f6be34316e719e) is 8.0M, max 639.9M, 631.9M free. Nov 12 22:03:35.270786 systemd[1]: Queued start job for default target multi-user.target. Nov 12 22:03:35.287903 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Nov 12 22:03:35.288162 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 22:03:37.220140 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 22:03:37.254158 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 22:03:37.287138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 12 22:03:37.320899 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 22:03:37.320930 systemd[1]: Stopped verity-setup.service. Nov 12 22:03:37.384140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:03:37.405306 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:03:37.415669 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 22:03:37.425377 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 22:03:37.435362 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 22:03:37.445374 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 22:03:37.455318 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 22:03:37.465330 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 22:03:37.475454 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 22:03:37.486552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:03:37.497804 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 22:03:37.498038 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 22:03:37.509980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:03:37.510355 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:03:37.522041 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:03:37.522463 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:03:37.533038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:03:37.533445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:03:37.545029 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 22:03:37.545438 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 22:03:37.556034 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:03:37.556588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:03:37.567156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:03:37.578011 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 22:03:37.590009 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 22:03:37.601984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:03:37.636652 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 22:03:37.662362 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 22:03:37.673894 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 22:03:37.684280 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 22:03:37.684302 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:03:37.684986 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 22:03:37.706232 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Nov 12 22:03:37.718690 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 22:03:37.728438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:03:37.730236 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 22:03:37.741662 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 22:03:37.752213 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:03:37.752883 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 22:03:37.760935 systemd-journald[1373]: Time spent on flushing to /var/log/journal/0ced460d902d4a0682f6be34316e719e is 14.071ms for 1371 entries. Nov 12 22:03:37.760935 systemd-journald[1373]: System Journal (/var/log/journal/0ced460d902d4a0682f6be34316e719e) is 8.0M, max 195.6M, 187.6M free. Nov 12 22:03:37.798878 systemd-journald[1373]: Received client request to flush runtime journal. Nov 12 22:03:37.798909 kernel: loop0: detected capacity change from 0 to 210664 Nov 12 22:03:37.768904 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:03:37.770573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:03:37.783247 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 22:03:37.793074 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:03:37.830876 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 22:03:37.831842 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Nov 12 22:03:37.831853 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Nov 12 22:03:37.848119 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 22:03:37.853110 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 22:03:37.864319 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 22:03:37.875306 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 22:03:37.886331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 22:03:37.903311 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 22:03:37.913132 kernel: loop1: detected capacity change from 0 to 140768 Nov 12 22:03:37.923294 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:03:37.933309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:03:37.947034 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 22:03:37.974405 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 22:03:37.987132 kernel: loop2: detected capacity change from 0 to 8 Nov 12 22:03:37.998959 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 22:03:38.008645 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 22:03:38.009049 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 12 22:03:38.020699 udevadm[1412]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 22:03:38.027731 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 22:03:38.044140 kernel: loop3: detected capacity change from 0 to 142488 Nov 12 22:03:38.063294 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:03:38.070716 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Nov 12 22:03:38.070727 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Nov 12 22:03:38.075349 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:03:38.104966 ldconfig[1399]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 22:03:38.106069 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 22:03:38.119146 kernel: loop4: detected capacity change from 0 to 210664 Nov 12 22:03:38.149152 kernel: loop5: detected capacity change from 0 to 140768 Nov 12 22:03:38.180149 kernel: loop6: detected capacity change from 0 to 8 Nov 12 22:03:38.180201 kernel: loop7: detected capacity change from 0 to 142488 Nov 12 22:03:38.210182 (sd-merge)[1432]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Nov 12 22:03:38.210435 (sd-merge)[1432]: Merged extensions into '/usr'. Nov 12 22:03:38.212608 systemd[1]: Reloading requested from client PID 1404 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 22:03:38.212615 systemd[1]: Reloading... Nov 12 22:03:38.249176 zram_generator::config[1458]: No configuration found. Nov 12 22:03:38.304713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:03:38.342635 systemd[1]: Reloading finished in 129 ms. Nov 12 22:03:38.369266 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 22:03:38.380515 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 22:03:38.403454 systemd[1]: Starting ensure-sysext.service... Nov 12 22:03:38.411037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:03:38.423386 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:03:38.437457 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 22:03:38.437883 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 22:03:38.438901 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 22:03:38.439352 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Nov 12 22:03:38.439427 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Nov 12 22:03:38.442560 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:03:38.442567 systemd-tmpfiles[1515]: Skipping /boot Nov 12 22:03:38.450866 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 12 22:03:38.450873 systemd-tmpfiles[1515]: Skipping /boot Nov 12 22:03:38.451224 systemd[1]: Reloading requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)... Nov 12 22:03:38.451237 systemd[1]: Reloading... Nov 12 22:03:38.466104 systemd-udevd[1516]: Using default interface naming scheme 'v255'. Nov 12 22:03:38.487152 zram_generator::config[1542]: No configuration found. Nov 12 22:03:38.536768 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Nov 12 22:03:38.536818 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1614) Nov 12 22:03:38.536829 kernel: ACPI: button: Sleep Button [SLPB] Nov 12 22:03:38.536839 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1591) Nov 12 22:03:38.551290 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 22:03:38.555988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:03:38.559097 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1591) Nov 12 22:03:38.569099 kernel: ACPI: button: Power Button [PWRF] Nov 12 22:03:38.628545 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Nov 12 22:03:38.628667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 12 22:03:38.663094 kernel: IPMI message handler: version 39.2 Nov 12 22:03:38.663125 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 22:03:38.678286 systemd[1]: Reloading finished in 226 ms. Nov 12 22:03:38.716841 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Nov 12 22:03:38.762445 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Nov 12 22:03:38.762554 kernel: ipmi device interface Nov 12 22:03:38.762566 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Nov 12 22:03:38.735920 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:03:38.811144 kernel: ipmi_si: IPMI System Interface driver Nov 12 22:03:38.811179 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Nov 12 22:03:38.864503 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Nov 12 22:03:38.864532 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Nov 12 22:03:38.864551 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Nov 12 22:03:38.952619 kernel: iTCO_vendor_support: vendor-support=0 Nov 12 22:03:38.952641 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Nov 12 22:03:38.952726 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Nov 12 22:03:38.952795 kernel: ipmi_si: Adding ACPI-specified kcs state machine Nov 12 22:03:38.952807 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Nov 12 22:03:38.965469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 12 22:03:38.994188 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Nov 12 22:03:38.994368 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Nov 12 22:03:39.015571 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Nov 12 22:03:39.009328 systemd[1]: Finished ensure-sysext.service. Nov 12 22:03:39.063094 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Nov 12 22:03:39.102513 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Nov 12 22:03:39.102615 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Nov 12 22:03:39.102696 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Nov 12 22:03:39.073879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:03:39.152096 kernel: ipmi_ssif: IPMI SSIF Interface driver Nov 12 22:03:39.163434 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 22:03:39.173097 kernel: intel_rapl_common: Found RAPL domain package Nov 12 22:03:39.173132 kernel: intel_rapl_common: Found RAPL domain core Nov 12 22:03:39.173144 kernel: intel_rapl_common: Found RAPL domain dram Nov 12 22:03:39.180862 augenrules[1710]: No rules Nov 12 22:03:39.214198 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 22:03:39.225217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:03:39.225780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:03:39.235615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:03:39.245619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:03:39.256689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:03:39.266237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:03:39.266765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 22:03:39.277750 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 22:03:39.289066 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:03:39.290004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:03:39.290867 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 22:03:39.305830 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 22:03:39.316811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:03:39.337174 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:03:39.337712 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 22:03:39.348339 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 22:03:39.348511 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Nov 12 22:03:39.348664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:03:39.348750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:03:39.348903 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:03:39.348985 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:03:39.349142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:03:39.349224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:03:39.349371 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:03:39.349451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:03:39.349600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 22:03:39.349755 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 22:03:39.354277 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 22:03:39.354318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:03:39.354361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:03:39.354954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 22:03:39.355778 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 22:03:39.355810 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 22:03:39.356011 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 22:03:39.363215 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 22:03:39.366243 lvm[1741]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:03:39.378528 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 22:03:39.402732 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 22:03:39.417795 systemd-resolved[1726]: Positive Trust Anchors: Nov 12 22:03:39.417802 systemd-resolved[1726]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:03:39.417826 systemd-resolved[1726]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:03:39.420424 systemd-resolved[1726]: Using system hostname 'ci-4081.2.0-a-a9d0314af7'. 
Nov 12 22:03:39.424250 systemd-networkd[1725]: lo: Link UP Nov 12 22:03:39.424257 systemd-networkd[1725]: lo: Gained carrier Nov 12 22:03:39.427031 systemd-networkd[1725]: bond0: netdev ready Nov 12 22:03:39.428055 systemd-networkd[1725]: Enumeration completed Nov 12 22:03:39.432900 systemd-networkd[1725]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:15:af:98.network. Nov 12 22:03:39.480315 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 22:03:39.491402 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:03:39.501208 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:03:39.511298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:03:39.523363 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:03:39.533169 systemd[1]: Reached target network.target - Network. Nov 12 22:03:39.541170 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:03:39.552177 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:03:39.562218 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 22:03:39.573188 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 22:03:39.584176 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 22:03:39.595128 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 22:03:39.595143 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:03:39.603128 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 22:03:39.613249 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 22:03:39.623222 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 22:03:39.634126 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:03:39.642406 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 22:03:39.652789 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 22:03:39.662500 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 22:03:39.672803 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 22:03:39.683802 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 22:03:39.685936 lvm[1765]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:03:39.695538 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 22:03:39.705204 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:03:39.715161 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:03:39.723190 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:03:39.723204 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:03:39.738344 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 22:03:39.749253 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 12 22:03:39.759979 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Nov 12 22:03:39.769061 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 22:03:39.778136 coreos-metadata[1768]: Nov 12 22:03:39.778 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 12 22:03:39.778990 coreos-metadata[1768]: Nov 12 22:03:39.778 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Nov 12 22:03:39.781787 dbus-daemon[1769]: [system] SELinux support is enabled Nov 12 22:03:39.783723 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 22:03:39.793193 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 12 22:03:39.794845 jq[1772]: false Nov 12 22:03:39.810215 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 22:03:39.810824 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 22:03:39.817144 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Nov 12 22:03:39.817330 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 22:03:39.821642 systemd-networkd[1725]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:15:af:99.network. Nov 12 22:03:39.827057 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 22:03:39.833914 extend-filesystems[1774]: Found loop4 Nov 12 22:03:39.833914 extend-filesystems[1774]: Found loop5 Nov 12 22:03:39.895207 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Nov 12 22:03:39.895225 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1635) Nov 12 22:03:39.895235 extend-filesystems[1774]: Found loop6 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found loop7 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sda Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb1 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb2 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb3 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found usr Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb4 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb6 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb7 Nov 12 22:03:39.895235 extend-filesystems[1774]: Found sdb9 Nov 12 22:03:39.895235 extend-filesystems[1774]: Checking size of /dev/sdb9 Nov 12 22:03:39.895235 extend-filesystems[1774]: Resized partition /dev/sdb9 Nov 12 22:03:40.132202 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 12 22:03:40.132322 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Nov 12 22:03:40.132333 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 12 22:03:40.132344 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Nov 12 22:03:40.132353 kernel: bond0: active interface up! Nov 12 22:03:39.837969 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 22:03:40.132431 extend-filesystems[1790]: resize2fs 1.47.1 (20-May-2024) Nov 12 22:03:39.882900 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 22:03:39.896202 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... 
Nov 12 22:03:39.904489 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 22:03:40.158427 sshd_keygen[1797]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 22:03:39.904852 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 22:03:40.158554 update_engine[1799]: I20241112 22:03:39.938159 1799 main.cc:92] Flatcar Update Engine starting Nov 12 22:03:40.158554 update_engine[1799]: I20241112 22:03:39.938892 1799 update_check_scheduler.cc:74] Next update check in 11m56s Nov 12 22:03:39.930187 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 22:03:40.158731 jq[1800]: true Nov 12 22:03:39.930496 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 22:03:39.931622 systemd-logind[1794]: Watching system buttons on /dev/input/event3 (Power Button) Nov 12 22:03:39.931632 systemd-logind[1794]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 12 22:03:39.931642 systemd-logind[1794]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 12 22:03:39.931766 systemd-logind[1794]: New seat seat0. Nov 12 22:03:40.159198 dbus-daemon[1769]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 12 22:03:39.983309 systemd-networkd[1725]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Nov 12 22:03:39.984552 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 22:03:39.984715 systemd-networkd[1725]: enp1s0f0np0: Link UP Nov 12 22:03:39.984939 systemd-networkd[1725]: enp1s0f0np0: Gained carrier Nov 12 22:03:40.012295 systemd-networkd[1725]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:15:af:98.network. Nov 12 22:03:40.012489 systemd-networkd[1725]: enp1s0f1np1: Link UP Nov 12 22:03:40.012675 systemd-networkd[1725]: enp1s0f1np1: Gained carrier Nov 12 22:03:40.024369 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 22:03:40.030290 systemd-networkd[1725]: bond0: Link UP Nov 12 22:03:40.030491 systemd-networkd[1725]: bond0: Gained carrier Nov 12 22:03:40.030617 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:40.030941 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:40.031166 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:40.031272 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:40.050378 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 22:03:40.050466 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 22:03:40.050611 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 22:03:40.050693 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 22:03:40.068716 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 22:03:40.068799 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 22:03:40.097531 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Nov 12 22:03:40.158598 (ntainerd)[1811]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 22:03:40.160206 jq[1810]: true Nov 12 22:03:40.164723 tar[1809]: linux-amd64/helm Nov 12 22:03:40.165687 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 12 22:03:40.165783 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Nov 12 22:03:40.167210 systemd[1]: Started update-engine.service - Update Engine. Nov 12 22:03:40.181214 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 22:03:40.189171 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 22:03:40.189268 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 22:03:40.200198 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 22:03:40.200306 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 22:03:40.223574 bash[1841]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:03:40.241253 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Nov 12 22:03:40.241281 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 22:03:40.253397 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 22:03:40.258970 locksmithd[1848]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 22:03:40.264447 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 22:03:40.264544 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 22:03:40.288320 systemd[1]: Starting sshkeys.service... Nov 12 22:03:40.295874 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 22:03:40.322421 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 12 22:03:40.333884 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 12 22:03:40.345550 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 22:03:40.349053 containerd[1811]: time="2024-11-12T22:03:40.349009096Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 22:03:40.357835 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 22:03:40.358061 coreos-metadata[1863]: Nov 12 22:03:40.358 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 12 22:03:40.365904 containerd[1811]: time="2024-11-12T22:03:40.365877431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366670 containerd[1811]: time="2024-11-12T22:03:40.366627397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366670 containerd[1811]: time="2024-11-12T22:03:40.366645266Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 22:03:40.366670 containerd[1811]: time="2024-11-12T22:03:40.366655318Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 22:03:40.366749 containerd[1811]: time="2024-11-12T22:03:40.366737256Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 22:03:40.366749 containerd[1811]: time="2024-11-12T22:03:40.366747620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366795 containerd[1811]: time="2024-11-12T22:03:40.366782410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366795 containerd[1811]: time="2024-11-12T22:03:40.366791037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366911 containerd[1811]: time="2024-11-12T22:03:40.366877701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366911 containerd[1811]: time="2024-11-12T22:03:40.366886893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366911 containerd[1811]: time="2024-11-12T22:03:40.366894451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366911 containerd[1811]: time="2024-11-12T22:03:40.366900382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366970 containerd[1811]: time="2024-11-12T22:03:40.366940719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:03:40.366962 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Nov 12 22:03:40.367147 containerd[1811]: time="2024-11-12T22:03:40.367108985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:03:40.367175 containerd[1811]: time="2024-11-12T22:03:40.367164600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:03:40.367175 containerd[1811]: time="2024-11-12T22:03:40.367173211Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 22:03:40.367256 containerd[1811]: time="2024-11-12T22:03:40.367219039Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 12 22:03:40.367256 containerd[1811]: time="2024-11-12T22:03:40.367245640Z" level=info msg="metadata content store policy set" policy=shared Nov 12 22:03:40.377302 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 22:03:40.378748 containerd[1811]: time="2024-11-12T22:03:40.378704758Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 22:03:40.378748 containerd[1811]: time="2024-11-12T22:03:40.378728478Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 22:03:40.378748 containerd[1811]: time="2024-11-12T22:03:40.378738483Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 22:03:40.378748 containerd[1811]: time="2024-11-12T22:03:40.378747270Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 22:03:40.378814 containerd[1811]: time="2024-11-12T22:03:40.378761644Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 22:03:40.378837 containerd[1811]: time="2024-11-12T22:03:40.378828136Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 22:03:40.379376 containerd[1811]: time="2024-11-12T22:03:40.379334162Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 22:03:40.379452 containerd[1811]: time="2024-11-12T22:03:40.379406041Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 22:03:40.379452 containerd[1811]: time="2024-11-12T22:03:40.379417591Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 22:03:40.379452 containerd[1811]: time="2024-11-12T22:03:40.379425574Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 22:03:40.379452 containerd[1811]: time="2024-11-12T22:03:40.379433674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 22:03:40.379452 containerd[1811]: time="2024-11-12T22:03:40.379440827Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 22:03:40.379452 containerd[1811]: time="2024-11-12T22:03:40.379447871Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379455659Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379463492Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379471001Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379478190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379484349Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379495361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379502925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379510160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379519875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379528449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379535780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379542170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379552 containerd[1811]: time="2024-11-12T22:03:40.379549317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379556578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379566585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379573555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379580656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379587275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379601290Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379612904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379619746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379626147Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379648368Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379657785Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379664132Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 22:03:40.379721 containerd[1811]: time="2024-11-12T22:03:40.379670798Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 22:03:40.379886 containerd[1811]: time="2024-11-12T22:03:40.379677286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379886 containerd[1811]: time="2024-11-12T22:03:40.379687520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 22:03:40.379886 containerd[1811]: time="2024-11-12T22:03:40.379696737Z" level=info msg="NRI interface is disabled by configuration." Nov 12 22:03:40.379886 containerd[1811]: time="2024-11-12T22:03:40.379702562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 22:03:40.379941 containerd[1811]: time="2024-11-12T22:03:40.379863593Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 22:03:40.379941 containerd[1811]: time="2024-11-12T22:03:40.379898501Z" level=info msg="Connect containerd service" Nov 12 22:03:40.379941 containerd[1811]: time="2024-11-12T22:03:40.379916248Z" level=info msg="using legacy CRI server" Nov 12 22:03:40.379941 containerd[1811]: time="2024-11-12T22:03:40.379920417Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 22:03:40.380058 containerd[1811]: time="2024-11-12T22:03:40.379968102Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 22:03:40.380327 containerd[1811]: time="2024-11-12T22:03:40.380269719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:03:40.380458 containerd[1811]: time="2024-11-12T22:03:40.380406939Z" level=info msg="Start subscribing containerd event" Nov 12 22:03:40.380458 containerd[1811]: time="2024-11-12T22:03:40.380436851Z" level=info msg="Start recovering state" Nov 12 22:03:40.380458 containerd[1811]: time="2024-11-12T22:03:40.380452495Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 22:03:40.380516 containerd[1811]: time="2024-11-12T22:03:40.380481728Z" level=info msg="Start event monitor" Nov 12 22:03:40.380516 containerd[1811]: time="2024-11-12T22:03:40.380491081Z" level=info msg="Start snapshots syncer" Nov 12 22:03:40.380516 containerd[1811]: time="2024-11-12T22:03:40.380497233Z" level=info msg="Start cni network conf syncer for default" Nov 12 22:03:40.380516 containerd[1811]: time="2024-11-12T22:03:40.380501204Z" level=info msg="Start streaming server" Nov 12 22:03:40.380588 containerd[1811]: time="2024-11-12T22:03:40.380482534Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 22:03:40.380588 containerd[1811]: time="2024-11-12T22:03:40.380568819Z" level=info msg="containerd successfully booted in 0.032286s" Nov 12 22:03:40.386450 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 22:03:40.415125 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Nov 12 22:03:40.437083 extend-filesystems[1790]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Nov 12 22:03:40.437083 extend-filesystems[1790]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 12 22:03:40.437083 extend-filesystems[1790]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Nov 12 22:03:40.480172 extend-filesystems[1774]: Resized filesystem in /dev/sdb9 Nov 12 22:03:40.480226 tar[1809]: linux-amd64/LICENSE Nov 12 22:03:40.480226 tar[1809]: linux-amd64/README.md Nov 12 22:03:40.437934 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 22:03:40.438038 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 22:03:40.490298 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 12 22:03:40.779266 coreos-metadata[1768]: Nov 12 22:03:40.779 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 12 22:03:41.832533 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:41.896159 systemd-networkd[1725]: bond0: Gained IPv6LL Nov 12 22:03:41.896299 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:41.897353 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 22:03:41.908894 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 22:03:41.927339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:03:41.937773 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 22:03:41.956040 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 22:03:42.643149 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Nov 12 22:03:42.643301 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Nov 12 22:03:42.666896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:03:42.678832 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:03:43.240739 kubelet[1905]: E1112 22:03:43.240654 1905 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:03:43.241777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:03:43.241851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:03:44.175020 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 22:03:44.191425 systemd[1]: Started sshd@0-147.75.202.249:22-147.75.109.163:42438.service - OpenSSH per-connection server daemon (147.75.109.163:42438). Nov 12 22:03:44.232926 coreos-metadata[1768]: Nov 12 22:03:44.232 INFO Fetch successful Nov 12 22:03:44.252980 sshd[1925]: Accepted publickey for core from 147.75.109.163 port 42438 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:03:44.254506 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:03:44.259996 systemd-logind[1794]: New session 1 of user core. Nov 12 22:03:44.278291 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 22:03:44.289060 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 22:03:44.304972 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 12 22:03:44.318544 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 22:03:44.343063 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Nov 12 22:03:44.356412 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 22:03:44.367366 (systemd)[1935]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 22:03:44.442781 systemd[1935]: Queued start job for default target default.target. Nov 12 22:03:44.452762 systemd[1935]: Created slice app.slice - User Application Slice. 
Nov 12 22:03:44.452776 systemd[1935]: Reached target paths.target - Paths. Nov 12 22:03:44.452784 systemd[1935]: Reached target timers.target - Timers. Nov 12 22:03:44.453417 systemd[1935]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 22:03:44.458918 systemd[1935]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 22:03:44.458946 systemd[1935]: Reached target sockets.target - Sockets. Nov 12 22:03:44.458955 systemd[1935]: Reached target basic.target - Basic System. Nov 12 22:03:44.458976 systemd[1935]: Reached target default.target - Main User Target. Nov 12 22:03:44.458991 systemd[1935]: Startup finished in 88ms. Nov 12 22:03:44.459097 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 22:03:44.471247 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 22:03:44.537318 systemd[1]: Started sshd@1-147.75.202.249:22-147.75.109.163:42454.service - OpenSSH per-connection server daemon (147.75.109.163:42454). Nov 12 22:03:44.541383 coreos-metadata[1863]: Nov 12 22:03:44.541 INFO Fetch successful Nov 12 22:03:44.569290 sshd[1946]: Accepted publickey for core from 147.75.109.163 port 42454 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:03:44.569985 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:03:44.572323 systemd-logind[1794]: New session 2 of user core. Nov 12 22:03:44.573357 unknown[1863]: wrote ssh authorized keys file for user: core Nov 12 22:03:44.590309 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 22:03:44.612608 update-ssh-keys[1948]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:03:44.613157 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 22:03:44.624715 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Nov 12 22:03:44.636584 systemd[1]: Finished sshkeys.service. Nov 12 22:03:44.644639 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 22:03:44.655572 systemd[1]: Startup finished in 2.662s (kernel) + 22.752s (initrd) + 10.059s (userspace) = 35.474s. Nov 12 22:03:44.670425 sshd[1946]: pam_unix(sshd:session): session closed for user core Nov 12 22:03:44.680691 systemd[1]: sshd@1-147.75.202.249:22-147.75.109.163:42454.service: Deactivated successfully. Nov 12 22:03:44.684262 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 22:03:44.686086 systemd-logind[1794]: Session 2 logged out. Waiting for processes to exit. Nov 12 22:03:44.689252 systemd[1]: Started sshd@2-147.75.202.249:22-147.75.109.163:42462.service - OpenSSH per-connection server daemon (147.75.109.163:42462). Nov 12 22:03:44.691683 systemd-logind[1794]: Removed session 2. Nov 12 22:03:44.694995 login[1879]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 22:03:44.697684 systemd-logind[1794]: New session 3 of user core. Nov 12 22:03:44.698398 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 22:03:44.705025 login[1875]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 22:03:44.707745 systemd-logind[1794]: New session 4 of user core. Nov 12 22:03:44.708420 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 12 22:03:44.718497 sshd[1960]: Accepted publickey for core from 147.75.109.163 port 42462 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:03:44.719270 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:03:44.721639 systemd-logind[1794]: New session 5 of user core. Nov 12 22:03:44.722103 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 22:03:44.767715 sshd[1960]: pam_unix(sshd:session): session closed for user core Nov 12 22:03:44.788505 systemd[1]: sshd@2-147.75.202.249:22-147.75.109.163:42462.service: Deactivated successfully. Nov 12 22:03:44.790256 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 22:03:44.791923 systemd-logind[1794]: Session 5 logged out. Waiting for processes to exit. Nov 12 22:03:44.793911 systemd[1]: Started sshd@3-147.75.202.249:22-147.75.109.163:42478.service - OpenSSH per-connection server daemon (147.75.109.163:42478). Nov 12 22:03:44.795211 systemd-logind[1794]: Removed session 5. Nov 12 22:03:44.850709 sshd[1990]: Accepted publickey for core from 147.75.109.163 port 42478 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:03:44.852728 sshd[1990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:03:44.859638 systemd-logind[1794]: New session 6 of user core. Nov 12 22:03:44.869400 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 22:03:44.935903 sshd[1990]: pam_unix(sshd:session): session closed for user core Nov 12 22:03:44.963043 systemd[1]: sshd@3-147.75.202.249:22-147.75.109.163:42478.service: Deactivated successfully. Nov 12 22:03:44.966618 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 22:03:44.969909 systemd-logind[1794]: Session 6 logged out. Waiting for processes to exit. Nov 12 22:03:44.989049 systemd[1]: Started sshd@4-147.75.202.249:22-147.75.109.163:42492.service - OpenSSH per-connection server daemon (147.75.109.163:42492). Nov 12 22:03:44.991729 systemd-logind[1794]: Removed session 6. Nov 12 22:03:45.036387 sshd[1997]: Accepted publickey for core from 147.75.109.163 port 42492 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:03:45.036976 sshd[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:03:45.039636 systemd-logind[1794]: New session 7 of user core. Nov 12 22:03:45.049386 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 22:03:45.106629 sudo[2000]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 22:03:45.106780 sudo[2000]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:03:45.120745 sudo[2000]: pam_unix(sudo:session): session closed for user root Nov 12 22:03:45.121803 sshd[1997]: pam_unix(sshd:session): session closed for user core Nov 12 22:03:45.142038 systemd[1]: sshd@4-147.75.202.249:22-147.75.109.163:42492.service: Deactivated successfully. Nov 12 22:03:45.143182 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 22:03:45.144167 systemd-logind[1794]: Session 7 logged out. Waiting for processes to exit. Nov 12 22:03:45.145233 systemd[1]: Started sshd@5-147.75.202.249:22-147.75.109.163:42500.service - OpenSSH per-connection server daemon (147.75.109.163:42500). Nov 12 22:03:45.145997 systemd-logind[1794]: Removed session 7. 
Nov 12 22:03:45.170959 sshd[2005]: Accepted publickey for core from 147.75.109.163 port 42500 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:03:45.171696 sshd[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:03:45.174770 systemd-logind[1794]: New session 8 of user core. Nov 12 22:03:45.193426 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 22:03:45.252029 sudo[2009]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 22:03:45.252216 sudo[2009]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:03:45.254303 sudo[2009]: pam_unix(sudo:session): session closed for user root Nov 12 22:03:45.256877 sudo[2008]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 22:03:45.257025 sudo[2008]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:03:45.276396 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 22:03:45.277571 auditctl[2012]: No rules Nov 12 22:03:45.277793 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:03:45.277915 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 22:03:45.279611 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 22:03:45.307324 augenrules[2030]: No rules Nov 12 22:03:45.307691 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 22:03:45.308373 sudo[2008]: pam_unix(sudo:session): session closed for user root Nov 12 22:03:45.309432 sshd[2005]: pam_unix(sshd:session): session closed for user core Nov 12 22:03:45.326782 systemd[1]: sshd@5-147.75.202.249:22-147.75.109.163:42500.service: Deactivated successfully. Nov 12 22:03:45.328076 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 22:03:45.329432 systemd-logind[1794]: Session 8 logged out. Waiting for processes to exit. Nov 12 22:03:45.330883 systemd[1]: Started sshd@6-147.75.202.249:22-147.75.109.163:42510.service - OpenSSH per-connection server daemon (147.75.109.163:42510). Nov 12 22:03:45.331948 systemd-logind[1794]: Removed session 8. Nov 12 22:03:45.377518 sshd[2038]: Accepted publickey for core from 147.75.109.163 port 42510 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:03:45.379514 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:03:45.386417 systemd-logind[1794]: New session 9 of user core. Nov 12 22:03:45.409607 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 22:03:45.479264 sudo[2041]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 22:03:45.480083 sudo[2041]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:03:45.966277 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 22:03:45.966326 (dockerd)[2066]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 22:03:46.235107 dockerd[2066]: time="2024-11-12T22:03:46.235037939Z" level=info msg="Starting up" Nov 12 22:03:46.508954 dockerd[2066]: time="2024-11-12T22:03:46.508858769Z" level=info msg="Loading containers: start." 
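The sudo entries above are provisioning steps run by the core user: SELinux is switched to enforcing, the two shipped rule files under /etc/audit/rules.d/ are removed, and audit-rules.service is restarted, after which auditctl and augenrules both report "No rules". A hedged re-creation of that sequence (file and unit names are taken from the log; run as root):

    # Switch SELinux to enforcing and confirm
    setenforce 1
    getenforce

    # Drop the shipped audit rules and reload the now-empty ruleset
    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules.service

    # Should print "No rules", matching the auditctl/augenrules entries above
    auditctl -l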
Nov 12 22:03:46.589141 kernel: Initializing XFRM netlink socket Nov 12 22:03:46.606007 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:46.606118 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:46.609096 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:46.640615 systemd-networkd[1725]: docker0: Link UP Nov 12 22:03:46.640780 systemd-timesyncd[1727]: Network configuration changed, trying to establish connection. Nov 12 22:03:46.663127 dockerd[2066]: time="2024-11-12T22:03:46.663074119Z" level=info msg="Loading containers: done." Nov 12 22:03:46.687113 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1619352087-merged.mount: Deactivated successfully. Nov 12 22:03:46.688358 dockerd[2066]: time="2024-11-12T22:03:46.688314482Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 22:03:46.688401 dockerd[2066]: time="2024-11-12T22:03:46.688365667Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 22:03:46.688420 dockerd[2066]: time="2024-11-12T22:03:46.688414829Z" level=info msg="Daemon has completed initialization" Nov 12 22:03:46.703912 dockerd[2066]: time="2024-11-12T22:03:46.703852485Z" level=info msg="API listen on /run/docker.sock" Nov 12 22:03:46.703977 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 22:03:47.910033 containerd[1811]: time="2024-11-12T22:03:47.910010906Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 12 22:03:48.550046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926563246.mount: Deactivated successfully. 
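With "Daemon has completed initialization" logged and docker0 up, the engine is reachable on the socket named in the "API listen" entry. A small probe sketch (the /_ping endpoint is the stock Docker Engine API liveness check; the version comparison is against the 26.1.0 reported above):

    # Liveness ping directly against the Unix socket; prints "OK"
    curl --silent --unix-socket /run/docker.sock http://localhost/_ping; echo

    # Daemon version as seen by the client (the log reports 26.1.0)
    docker version --format '{{.Server.Version}}'

    # The bridge interface systemd-networkd just reported as "Link UP"
    ip link show docker0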
Nov 12 22:03:50.077875 containerd[1811]: time="2024-11-12T22:03:50.077818855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:50.078080 containerd[1811]: time="2024-11-12T22:03:50.077949743Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676443" Nov 12 22:03:50.078454 containerd[1811]: time="2024-11-12T22:03:50.078415156Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:50.080024 containerd[1811]: time="2024-11-12T22:03:50.079984178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:50.080662 containerd[1811]: time="2024-11-12T22:03:50.080622205Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 2.170588859s" Nov 12 22:03:50.080662 containerd[1811]: time="2024-11-12T22:03:50.080637659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\"" Nov 12 22:03:50.092275 containerd[1811]: time="2024-11-12T22:03:50.092255814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 12 22:03:52.016222 containerd[1811]: time="2024-11-12T22:03:52.016167469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:52.016429 containerd[1811]: time="2024-11-12T22:03:52.016351485Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605796" Nov 12 22:03:52.016761 containerd[1811]: time="2024-11-12T22:03:52.016718149Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:52.018729 containerd[1811]: time="2024-11-12T22:03:52.018684615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:52.019226 containerd[1811]: time="2024-11-12T22:03:52.019185107Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 1.926910129s" Nov 12 22:03:52.019226 containerd[1811]: time="2024-11-12T22:03:52.019201555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\"" Nov 12 22:03:52.031480 
containerd[1811]: time="2024-11-12T22:03:52.031460535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 12 22:03:53.270858 containerd[1811]: time="2024-11-12T22:03:53.270801114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:53.271061 containerd[1811]: time="2024-11-12T22:03:53.271022228Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784244" Nov 12 22:03:53.271429 containerd[1811]: time="2024-11-12T22:03:53.271378769Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:53.273139 containerd[1811]: time="2024-11-12T22:03:53.273100711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:53.273621 containerd[1811]: time="2024-11-12T22:03:53.273580239Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 1.242100258s" Nov 12 22:03:53.273621 containerd[1811]: time="2024-11-12T22:03:53.273595585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\"" Nov 12 22:03:53.284864 containerd[1811]: time="2024-11-12T22:03:53.284844700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 12 22:03:53.492332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 22:03:53.502302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:03:53.716063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:03:53.718154 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:03:53.741806 kubelet[2358]: E1112 22:03:53.741782 2358 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:03:53.743848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:03:53.743922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:03:54.271382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805546690.mount: Deactivated successfully. 
Nov 12 22:03:54.535633 containerd[1811]: time="2024-11-12T22:03:54.535546008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:54.535822 containerd[1811]: time="2024-11-12T22:03:54.535768265Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054624" Nov 12 22:03:54.536139 containerd[1811]: time="2024-11-12T22:03:54.536120718Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:54.537023 containerd[1811]: time="2024-11-12T22:03:54.536974691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:54.537413 containerd[1811]: time="2024-11-12T22:03:54.537373644Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 1.25250879s" Nov 12 22:03:54.537413 containerd[1811]: time="2024-11-12T22:03:54.537389329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\"" Nov 12 22:03:54.547926 containerd[1811]: time="2024-11-12T22:03:54.547875092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 22:03:55.092871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285875224.mount: Deactivated successfully. 
Nov 12 22:03:55.601839 containerd[1811]: time="2024-11-12T22:03:55.601784490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:55.602023 containerd[1811]: time="2024-11-12T22:03:55.601879289Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 22:03:55.602403 containerd[1811]: time="2024-11-12T22:03:55.602363381Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:55.603949 containerd[1811]: time="2024-11-12T22:03:55.603908762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:55.604604 containerd[1811]: time="2024-11-12T22:03:55.604556541Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.05666015s" Nov 12 22:03:55.604604 containerd[1811]: time="2024-11-12T22:03:55.604573678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 22:03:55.615519 containerd[1811]: time="2024-11-12T22:03:55.615500518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 22:03:56.078485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796963844.mount: Deactivated successfully. 
Nov 12 22:03:56.079855 containerd[1811]: time="2024-11-12T22:03:56.079815094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:56.079988 containerd[1811]: time="2024-11-12T22:03:56.079964233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 22:03:56.080435 containerd[1811]: time="2024-11-12T22:03:56.080421035Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:56.081801 containerd[1811]: time="2024-11-12T22:03:56.081787496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:56.082317 containerd[1811]: time="2024-11-12T22:03:56.082303872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 466.78417ms" Nov 12 22:03:56.082358 containerd[1811]: time="2024-11-12T22:03:56.082319607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 22:03:56.094368 containerd[1811]: time="2024-11-12T22:03:56.094300983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 12 22:03:56.642555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778168982.mount: Deactivated successfully. Nov 12 22:03:58.728606 containerd[1811]: time="2024-11-12T22:03:58.728577891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:58.728839 containerd[1811]: time="2024-11-12T22:03:58.728747363Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Nov 12 22:03:58.729322 containerd[1811]: time="2024-11-12T22:03:58.729309292Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:58.731032 containerd[1811]: time="2024-11-12T22:03:58.730991193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:03:58.731789 containerd[1811]: time="2024-11-12T22:03:58.731745104Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.637423254s" Nov 12 22:03:58.731789 containerd[1811]: time="2024-11-12T22:03:58.731766308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Nov 12 22:04:00.628831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
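All of the pulls above are performed by containerd on behalf of the CRI, so the control-plane images end up in containerd's k8s.io namespace rather than in the Docker image store started earlier. A listing sketch (the namespace is the standard CRI one; the grep pattern simply matches the image names pulled in this log):

    # Control-plane images pulled via CRI live in containerd's k8s.io namespace
    ctr --namespace k8s.io images ls | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|coredns|etcd|pause'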
Nov 12 22:04:00.642289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:04:00.654740 systemd[1]: Reloading requested from client PID 2670 ('systemctl') (unit session-9.scope)... Nov 12 22:04:00.654747 systemd[1]: Reloading... Nov 12 22:04:00.693144 zram_generator::config[2709]: No configuration found. Nov 12 22:04:00.760378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:04:00.819994 systemd[1]: Reloading finished in 165 ms. Nov 12 22:04:00.862913 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 22:04:00.863126 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 22:04:00.863721 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:04:00.869419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:04:01.090678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:04:01.099819 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:04:01.122609 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:04:01.122609 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:04:01.122609 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:04:01.122816 kubelet[2775]: I1112 22:04:01.122608 2775 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:04:01.297938 kubelet[2775]: I1112 22:04:01.297887 2775 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 22:04:01.297938 kubelet[2775]: I1112 22:04:01.297905 2775 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:04:01.298043 kubelet[2775]: I1112 22:04:01.298037 2775 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 22:04:01.308481 kubelet[2775]: I1112 22:04:01.308467 2775 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:04:01.309514 kubelet[2775]: E1112 22:04:01.309469 2775 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.202.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.324883 kubelet[2775]: I1112 22:04:01.324870 2775 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 22:04:01.325851 kubelet[2775]: I1112 22:04:01.325832 2775 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:04:01.325969 kubelet[2775]: I1112 22:04:01.325853 2775 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.0-a-a9d0314af7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:04:01.326344 kubelet[2775]: I1112 22:04:01.326335 2775 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:04:01.326377 kubelet[2775]: I1112 22:04:01.326345 2775 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:04:01.326426 kubelet[2775]: I1112 22:04:01.326417 2775 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:04:01.327183 kubelet[2775]: I1112 22:04:01.327174 2775 kubelet.go:400] "Attempting to sync node with API server" Nov 12 22:04:01.327215 kubelet[2775]: I1112 22:04:01.327183 2775 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:04:01.327215 kubelet[2775]: I1112 22:04:01.327199 2775 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:04:01.327215 kubelet[2775]: I1112 22:04:01.327212 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:04:01.329025 kubelet[2775]: W1112 22:04:01.328987 2775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.329025 kubelet[2775]: W1112 22:04:01.329008 2775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.202.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-a9d0314af7&limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.329135 kubelet[2775]: E1112 22:04:01.329035 2775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get "https://147.75.202.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-a9d0314af7&limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.329135 kubelet[2775]: E1112 22:04:01.329035 2775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.202.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.330662 kubelet[2775]: I1112 22:04:01.330649 2775 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 22:04:01.331790 kubelet[2775]: I1112 22:04:01.331753 2775 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:04:01.331790 kubelet[2775]: W1112 22:04:01.331780 2775 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 22:04:01.332050 kubelet[2775]: I1112 22:04:01.332044 2775 server.go:1264] "Started kubelet" Nov 12 22:04:01.332146 kubelet[2775]: I1112 22:04:01.332101 2775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:04:01.332270 kubelet[2775]: I1112 22:04:01.332224 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:04:01.332406 kubelet[2775]: I1112 22:04:01.332396 2775 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:04:01.333783 kubelet[2775]: E1112 22:04:01.333774 2775 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:04:01.334163 kubelet[2775]: I1112 22:04:01.334157 2775 server.go:455] "Adding debug handlers to kubelet server" Nov 12 22:04:01.334210 kubelet[2775]: I1112 22:04:01.334190 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:04:01.334228 kubelet[2775]: I1112 22:04:01.334222 2775 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:04:01.334260 kubelet[2775]: E1112 22:04:01.334230 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:01.334291 kubelet[2775]: I1112 22:04:01.334263 2775 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 22:04:01.334291 kubelet[2775]: I1112 22:04:01.334286 2775 reconciler.go:26] "Reconciler: start to sync state" Nov 12 22:04:01.337544 kubelet[2775]: E1112 22:04:01.337520 2775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-a9d0314af7?timeout=10s\": dial tcp 147.75.202.249:6443: connect: connection refused" interval="200ms" Nov 12 22:04:01.337764 kubelet[2775]: W1112 22:04:01.337732 2775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.337807 kubelet[2775]: E1112 22:04:01.337774 2775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://147.75.202.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.338149 kubelet[2775]: I1112 22:04:01.338018 2775 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:04:01.338149 kubelet[2775]: I1112 22:04:01.338115 2775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:04:01.339154 kubelet[2775]: E1112 22:04:01.338805 2775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.249:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.249:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-a-a9d0314af7.180757babf4957c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-a-a9d0314af7,UID:ci-4081.2.0-a-a9d0314af7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-a-a9d0314af7,},FirstTimestamp:2024-11-12 22:04:01.332033475 +0000 UTC m=+0.229963788,LastTimestamp:2024-11-12 22:04:01.332033475 +0000 UTC m=+0.229963788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-a-a9d0314af7,}" Nov 12 22:04:01.339287 kubelet[2775]: I1112 22:04:01.339277 2775 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:04:01.345134 kubelet[2775]: I1112 22:04:01.345068 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:04:01.345706 kubelet[2775]: I1112 22:04:01.345677 2775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 22:04:01.345740 kubelet[2775]: I1112 22:04:01.345708 2775 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:04:01.345740 kubelet[2775]: I1112 22:04:01.345721 2775 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 22:04:01.345779 kubelet[2775]: E1112 22:04:01.345749 2775 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:04:01.346022 kubelet[2775]: W1112 22:04:01.345991 2775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.202.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.346063 kubelet[2775]: E1112 22:04:01.346032 2775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.202.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:01.375331 kubelet[2775]: I1112 22:04:01.375246 2775 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:04:01.375331 kubelet[2775]: I1112 22:04:01.375287 2775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:04:01.375331 kubelet[2775]: I1112 22:04:01.375325 2775 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:04:01.377237 kubelet[2775]: I1112 22:04:01.377194 2775 policy_none.go:49] "None policy: Start" Nov 12 22:04:01.377768 kubelet[2775]: I1112 22:04:01.377705 2775 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:04:01.377768 kubelet[2775]: I1112 22:04:01.377727 2775 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:04:01.380549 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 22:04:01.399575 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 22:04:01.401354 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 22:04:01.417812 kubelet[2775]: I1112 22:04:01.417762 2775 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:04:01.417905 kubelet[2775]: I1112 22:04:01.417880 2775 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 22:04:01.417980 kubelet[2775]: I1112 22:04:01.417971 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:04:01.418582 kubelet[2775]: E1112 22:04:01.418568 2775 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:01.436878 kubelet[2775]: I1112 22:04:01.436851 2775 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.437268 kubelet[2775]: E1112 22:04:01.437199 2775 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.249:6443/api/v1/nodes\": dial tcp 147.75.202.249:6443: connect: connection refused" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.446558 kubelet[2775]: I1112 22:04:01.446484 2775 topology_manager.go:215] "Topology Admit Handler" podUID="56525a49ef592d0fb385cf91ce4ea459" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.449049 kubelet[2775]: I1112 22:04:01.448994 2775 topology_manager.go:215] "Topology Admit Handler" podUID="7cacfc83d7f0994f92dd46e67a3b41f3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.451965 kubelet[2775]: I1112 22:04:01.451918 2775 topology_manager.go:215] "Topology Admit Handler" podUID="9faed0542f01307fdcf83e568abe81bd" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.465582 systemd[1]: Created slice kubepods-burstable-pod56525a49ef592d0fb385cf91ce4ea459.slice - libcontainer container kubepods-burstable-pod56525a49ef592d0fb385cf91ce4ea459.slice. Nov 12 22:04:01.489153 systemd[1]: Created slice kubepods-burstable-pod7cacfc83d7f0994f92dd46e67a3b41f3.slice - libcontainer container kubepods-burstable-pod7cacfc83d7f0994f92dd46e67a3b41f3.slice. Nov 12 22:04:01.499461 systemd[1]: Created slice kubepods-burstable-pod9faed0542f01307fdcf83e568abe81bd.slice - libcontainer container kubepods-burstable-pod9faed0542f01307fdcf83e568abe81bd.slice. 
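The three "Topology Admit Handler" entries are the kubelet admitting the static pods it found under the /etc/kubernetes/manifests path registered earlier, and the kubepods-burstable-pod<UID>.slice units are the cgroups created for them. A correlation sketch (the directory and slice names are from the log; the manifest file names in the comment are the usual kubeadm ones and are an assumption):

    # Static pod manifests the kubelet is watching
    ls -l /etc/kubernetes/manifests/
    # typically kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml

    # Matching cgroup slices for the admitted pods
    systemctl list-units --type=slice 'kubepods*' --no-pager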
Nov 12 22:04:01.535782 kubelet[2775]: I1112 22:04:01.535669 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56525a49ef592d0fb385cf91ce4ea459-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-a9d0314af7\" (UID: \"56525a49ef592d0fb385cf91ce4ea459\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.535782 kubelet[2775]: I1112 22:04:01.535759 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56525a49ef592d0fb385cf91ce4ea459-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-a9d0314af7\" (UID: \"56525a49ef592d0fb385cf91ce4ea459\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.536113 kubelet[2775]: I1112 22:04:01.535819 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.536113 kubelet[2775]: I1112 22:04:01.535865 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9faed0542f01307fdcf83e568abe81bd-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-a9d0314af7\" (UID: \"9faed0542f01307fdcf83e568abe81bd\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.536113 kubelet[2775]: I1112 22:04:01.535904 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56525a49ef592d0fb385cf91ce4ea459-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-a9d0314af7\" (UID: \"56525a49ef592d0fb385cf91ce4ea459\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.536113 kubelet[2775]: I1112 22:04:01.535943 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.536113 kubelet[2775]: I1112 22:04:01.535982 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.536497 kubelet[2775]: I1112 22:04:01.536020 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.536497 kubelet[2775]: I1112 22:04:01.536061 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.538395 kubelet[2775]: E1112 22:04:01.538290 2775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-a9d0314af7?timeout=10s\": dial tcp 147.75.202.249:6443: connect: connection refused" interval="400ms" Nov 12 22:04:01.641695 kubelet[2775]: I1112 22:04:01.641626 2775 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.642417 kubelet[2775]: E1112 22:04:01.642311 2775 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.249:6443/api/v1/nodes\": dial tcp 147.75.202.249:6443: connect: connection refused" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:01.782452 containerd[1811]: time="2024-11-12T22:04:01.782325488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-a9d0314af7,Uid:56525a49ef592d0fb385cf91ce4ea459,Namespace:kube-system,Attempt:0,}" Nov 12 22:04:01.795816 containerd[1811]: time="2024-11-12T22:04:01.795756987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-a9d0314af7,Uid:7cacfc83d7f0994f92dd46e67a3b41f3,Namespace:kube-system,Attempt:0,}" Nov 12 22:04:01.804608 containerd[1811]: time="2024-11-12T22:04:01.804561945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-a9d0314af7,Uid:9faed0542f01307fdcf83e568abe81bd,Namespace:kube-system,Attempt:0,}" Nov 12 22:04:01.939877 kubelet[2775]: E1112 22:04:01.939624 2775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-a9d0314af7?timeout=10s\": dial tcp 147.75.202.249:6443: connect: connection refused" interval="800ms" Nov 12 22:04:02.046890 kubelet[2775]: I1112 22:04:02.046825 2775 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:02.047648 kubelet[2775]: E1112 22:04:02.047546 2775 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.249:6443/api/v1/nodes\": dial tcp 147.75.202.249:6443: connect: connection refused" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:02.201996 kubelet[2775]: W1112 22:04:02.201880 2775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.202.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-a9d0314af7&limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:02.201996 kubelet[2775]: E1112 22:04:02.201921 2775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.202.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-a9d0314af7&limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:02.252248 kubelet[2775]: W1112 22:04:02.252185 2775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://147.75.202.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:02.252248 kubelet[2775]: E1112 22:04:02.252221 2775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.202.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:02.274501 kubelet[2775]: W1112 22:04:02.274461 2775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:02.274501 kubelet[2775]: E1112 22:04:02.274483 2775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.202.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.249:6443: connect: connection refused Nov 12 22:04:02.328834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019748086.mount: Deactivated successfully. Nov 12 22:04:02.330986 containerd[1811]: time="2024-11-12T22:04:02.330940847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:04:02.331250 containerd[1811]: time="2024-11-12T22:04:02.331195056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 22:04:02.331616 containerd[1811]: time="2024-11-12T22:04:02.331575202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:04:02.332107 containerd[1811]: time="2024-11-12T22:04:02.332051104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:04:02.332498 containerd[1811]: time="2024-11-12T22:04:02.332455007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:04:02.332589 containerd[1811]: time="2024-11-12T22:04:02.332523082Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:04:02.332799 containerd[1811]: time="2024-11-12T22:04:02.332761949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:04:02.333850 containerd[1811]: time="2024-11-12T22:04:02.333810384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:04:02.335477 containerd[1811]: time="2024-11-12T22:04:02.335436512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.93188ms" Nov 12 22:04:02.336138 containerd[1811]: time="2024-11-12T22:04:02.336096581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.500526ms" Nov 12 22:04:02.337401 containerd[1811]: time="2024-11-12T22:04:02.337361087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.549666ms" Nov 12 22:04:02.460995 containerd[1811]: time="2024-11-12T22:04:02.460893734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:02.460995 containerd[1811]: time="2024-11-12T22:04:02.460924325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:02.461105 containerd[1811]: time="2024-11-12T22:04:02.461069343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:02.461132 containerd[1811]: time="2024-11-12T22:04:02.461099897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:02.461132 containerd[1811]: time="2024-11-12T22:04:02.461111249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:02.461166 containerd[1811]: time="2024-11-12T22:04:02.460952788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:02.461181 containerd[1811]: time="2024-11-12T22:04:02.461164476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:02.461204 containerd[1811]: time="2024-11-12T22:04:02.461189284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:02.461379 containerd[1811]: time="2024-11-12T22:04:02.461347855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:02.461568 containerd[1811]: time="2024-11-12T22:04:02.461550095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:02.461568 containerd[1811]: time="2024-11-12T22:04:02.461560399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:02.461639 containerd[1811]: time="2024-11-12T22:04:02.461604814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:02.480417 systemd[1]: Started cri-containerd-1bd1ce0b46781870d3d3615b37f3d60997f3b8c499dda55a7aee8d512c309fcc.scope - libcontainer container 1bd1ce0b46781870d3d3615b37f3d60997f3b8c499dda55a7aee8d512c309fcc. Nov 12 22:04:02.481077 systemd[1]: Started cri-containerd-372cf005998daeb37727fa23b59609d866c2029d9c0620f7374f666fd76e20c7.scope - libcontainer container 372cf005998daeb37727fa23b59609d866c2029d9c0620f7374f666fd76e20c7. Nov 12 22:04:02.482019 systemd[1]: Started cri-containerd-75213cd78948bf853f4d54cf28173ed0542497b5d7cb129cbd6b4772e1cf3222.scope - libcontainer container 75213cd78948bf853f4d54cf28173ed0542497b5d7cb129cbd6b4772e1cf3222. Nov 12 22:04:02.503006 containerd[1811]: time="2024-11-12T22:04:02.502982283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-a9d0314af7,Uid:9faed0542f01307fdcf83e568abe81bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bd1ce0b46781870d3d3615b37f3d60997f3b8c499dda55a7aee8d512c309fcc\"" Nov 12 22:04:02.504749 containerd[1811]: time="2024-11-12T22:04:02.504735337Z" level=info msg="CreateContainer within sandbox \"1bd1ce0b46781870d3d3615b37f3d60997f3b8c499dda55a7aee8d512c309fcc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 22:04:02.510424 containerd[1811]: time="2024-11-12T22:04:02.510399393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-a9d0314af7,Uid:56525a49ef592d0fb385cf91ce4ea459,Namespace:kube-system,Attempt:0,} returns sandbox id \"372cf005998daeb37727fa23b59609d866c2029d9c0620f7374f666fd76e20c7\"" Nov 12 22:04:02.510554 containerd[1811]: time="2024-11-12T22:04:02.510492593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-a9d0314af7,Uid:7cacfc83d7f0994f92dd46e67a3b41f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"75213cd78948bf853f4d54cf28173ed0542497b5d7cb129cbd6b4772e1cf3222\"" Nov 12 22:04:02.511672 containerd[1811]: time="2024-11-12T22:04:02.511657480Z" level=info msg="CreateContainer within sandbox \"372cf005998daeb37727fa23b59609d866c2029d9c0620f7374f666fd76e20c7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 22:04:02.511710 containerd[1811]: time="2024-11-12T22:04:02.511688036Z" level=info msg="CreateContainer within sandbox \"75213cd78948bf853f4d54cf28173ed0542497b5d7cb129cbd6b4772e1cf3222\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 22:04:02.512010 containerd[1811]: time="2024-11-12T22:04:02.511998240Z" level=info msg="CreateContainer within sandbox \"1bd1ce0b46781870d3d3615b37f3d60997f3b8c499dda55a7aee8d512c309fcc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"013ccf86e66754b6a7a1fd37ee49e155edd6da0a370e95a2792a6b8cd72022e6\"" Nov 12 22:04:02.512269 containerd[1811]: time="2024-11-12T22:04:02.512255256Z" level=info msg="StartContainer for \"013ccf86e66754b6a7a1fd37ee49e155edd6da0a370e95a2792a6b8cd72022e6\"" Nov 12 22:04:02.518313 containerd[1811]: time="2024-11-12T22:04:02.518289471Z" level=info msg="CreateContainer within sandbox \"75213cd78948bf853f4d54cf28173ed0542497b5d7cb129cbd6b4772e1cf3222\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fa16d52f692d3cf2494ac1b7d90bad65fa4271a2a6ffc8dc6dda1f2db8b39b3f\"" Nov 12 22:04:02.518614 containerd[1811]: time="2024-11-12T22:04:02.518598338Z" level=info msg="CreateContainer within sandbox 
\"372cf005998daeb37727fa23b59609d866c2029d9c0620f7374f666fd76e20c7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5441c615e84265a124ef490c23e2d738318ae6076656ae12d85ee67211eaee72\"" Nov 12 22:04:02.518663 containerd[1811]: time="2024-11-12T22:04:02.518604689Z" level=info msg="StartContainer for \"fa16d52f692d3cf2494ac1b7d90bad65fa4271a2a6ffc8dc6dda1f2db8b39b3f\"" Nov 12 22:04:02.518812 containerd[1811]: time="2024-11-12T22:04:02.518799140Z" level=info msg="StartContainer for \"5441c615e84265a124ef490c23e2d738318ae6076656ae12d85ee67211eaee72\"" Nov 12 22:04:02.531269 systemd[1]: Started cri-containerd-013ccf86e66754b6a7a1fd37ee49e155edd6da0a370e95a2792a6b8cd72022e6.scope - libcontainer container 013ccf86e66754b6a7a1fd37ee49e155edd6da0a370e95a2792a6b8cd72022e6. Nov 12 22:04:02.533306 systemd[1]: Started cri-containerd-5441c615e84265a124ef490c23e2d738318ae6076656ae12d85ee67211eaee72.scope - libcontainer container 5441c615e84265a124ef490c23e2d738318ae6076656ae12d85ee67211eaee72. Nov 12 22:04:02.533959 systemd[1]: Started cri-containerd-fa16d52f692d3cf2494ac1b7d90bad65fa4271a2a6ffc8dc6dda1f2db8b39b3f.scope - libcontainer container fa16d52f692d3cf2494ac1b7d90bad65fa4271a2a6ffc8dc6dda1f2db8b39b3f. Nov 12 22:04:02.557591 containerd[1811]: time="2024-11-12T22:04:02.557564531Z" level=info msg="StartContainer for \"5441c615e84265a124ef490c23e2d738318ae6076656ae12d85ee67211eaee72\" returns successfully" Nov 12 22:04:02.557672 containerd[1811]: time="2024-11-12T22:04:02.557564697Z" level=info msg="StartContainer for \"fa16d52f692d3cf2494ac1b7d90bad65fa4271a2a6ffc8dc6dda1f2db8b39b3f\" returns successfully" Nov 12 22:04:02.557672 containerd[1811]: time="2024-11-12T22:04:02.557564613Z" level=info msg="StartContainer for \"013ccf86e66754b6a7a1fd37ee49e155edd6da0a370e95a2792a6b8cd72022e6\" returns successfully" Nov 12 22:04:02.850663 kubelet[2775]: I1112 22:04:02.850217 2775 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:03.014268 kubelet[2775]: E1112 22:04:03.014241 2775 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.0-a-a9d0314af7\" not found" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:03.117622 kubelet[2775]: I1112 22:04:03.117580 2775 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:03.126727 kubelet[2775]: E1112 22:04:03.126709 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.227360 kubelet[2775]: E1112 22:04:03.227303 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.327957 kubelet[2775]: E1112 22:04:03.327851 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.428812 kubelet[2775]: E1112 22:04:03.428571 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.529749 kubelet[2775]: E1112 22:04:03.529639 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.630239 kubelet[2775]: E1112 22:04:03.630115 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.731492 
kubelet[2775]: E1112 22:04:03.731258 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.832514 kubelet[2775]: E1112 22:04:03.832404 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:03.932767 kubelet[2775]: E1112 22:04:03.932651 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.033074 kubelet[2775]: E1112 22:04:04.032829 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.133444 kubelet[2775]: E1112 22:04:04.133348 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.234321 kubelet[2775]: E1112 22:04:04.234230 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.335141 kubelet[2775]: E1112 22:04:04.334940 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.435410 kubelet[2775]: E1112 22:04:04.435303 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.536215 kubelet[2775]: E1112 22:04:04.536158 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.636381 kubelet[2775]: E1112 22:04:04.636308 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.737389 kubelet[2775]: E1112 22:04:04.737323 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.838329 kubelet[2775]: E1112 22:04:04.838254 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:04.938802 kubelet[2775]: E1112 22:04:04.938618 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:05.039850 kubelet[2775]: E1112 22:04:05.039779 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:05.140598 kubelet[2775]: E1112 22:04:05.140576 2775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:05.330056 kubelet[2775]: I1112 22:04:05.329826 2775 apiserver.go:52] "Watching apiserver" Nov 12 22:04:05.334808 kubelet[2775]: I1112 22:04:05.334725 2775 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 22:04:05.539232 systemd[1]: Reloading requested from client PID 3092 ('systemctl') (unit session-9.scope)... Nov 12 22:04:05.539239 systemd[1]: Reloading... Nov 12 22:04:05.580109 zram_generator::config[3131]: No configuration found. Nov 12 22:04:05.645100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 12 22:04:05.712680 systemd[1]: Reloading finished in 173 ms. Nov 12 22:04:05.738929 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:04:05.740984 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:04:05.741086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:04:05.757556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:04:05.968640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:04:05.974375 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:04:06.017145 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:04:06.017145 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:04:06.017145 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:04:06.017527 kubelet[3195]: I1112 22:04:06.017166 3195 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:04:06.021639 kubelet[3195]: I1112 22:04:06.021617 3195 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 22:04:06.021639 kubelet[3195]: I1112 22:04:06.021637 3195 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:04:06.021859 kubelet[3195]: I1112 22:04:06.021844 3195 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 22:04:06.023086 kubelet[3195]: I1112 22:04:06.023069 3195 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 22:04:06.024078 kubelet[3195]: I1112 22:04:06.024056 3195 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:04:06.036353 kubelet[3195]: I1112 22:04:06.036307 3195 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 22:04:06.036540 kubelet[3195]: I1112 22:04:06.036483 3195 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:04:06.036700 kubelet[3195]: I1112 22:04:06.036511 3195 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.0-a-a9d0314af7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:04:06.036700 kubelet[3195]: I1112 22:04:06.036689 3195 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:04:06.036700 kubelet[3195]: I1112 22:04:06.036700 3195 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:04:06.036869 kubelet[3195]: I1112 22:04:06.036736 3195 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:04:06.036869 kubelet[3195]: I1112 22:04:06.036827 3195 kubelet.go:400] "Attempting to sync node with API server" Nov 12 22:04:06.036869 kubelet[3195]: I1112 22:04:06.036841 3195 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:04:06.036869 kubelet[3195]: I1112 22:04:06.036860 3195 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:04:06.036999 kubelet[3195]: I1112 22:04:06.036873 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:04:06.037390 kubelet[3195]: I1112 22:04:06.037371 3195 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 22:04:06.037580 kubelet[3195]: I1112 22:04:06.037541 3195 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:04:06.037911 kubelet[3195]: I1112 22:04:06.037880 3195 server.go:1264] "Started kubelet" Nov 12 22:04:06.038023 kubelet[3195]: I1112 22:04:06.037966 3195 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:04:06.038332 kubelet[3195]: I1112 22:04:06.038001 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:04:06.038547 kubelet[3195]: I1112 
22:04:06.038533 3195 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:04:06.039343 kubelet[3195]: I1112 22:04:06.039325 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:04:06.039427 kubelet[3195]: I1112 22:04:06.039375 3195 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:04:06.039427 kubelet[3195]: I1112 22:04:06.039412 3195 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 22:04:06.039427 kubelet[3195]: E1112 22:04:06.039397 3195 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.0-a-a9d0314af7\" not found" Nov 12 22:04:06.039579 kubelet[3195]: I1112 22:04:06.039562 3195 server.go:455] "Adding debug handlers to kubelet server" Nov 12 22:04:06.039985 kubelet[3195]: I1112 22:04:06.039966 3195 reconciler.go:26] "Reconciler: start to sync state" Nov 12 22:04:06.040346 kubelet[3195]: I1112 22:04:06.040325 3195 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:04:06.040481 kubelet[3195]: I1112 22:04:06.040456 3195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:04:06.041438 kubelet[3195]: E1112 22:04:06.041414 3195 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:04:06.042638 kubelet[3195]: I1112 22:04:06.042619 3195 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:04:06.049104 kubelet[3195]: I1112 22:04:06.049066 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:04:06.050111 kubelet[3195]: I1112 22:04:06.050087 3195 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 22:04:06.050180 kubelet[3195]: I1112 22:04:06.050121 3195 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:04:06.050180 kubelet[3195]: I1112 22:04:06.050140 3195 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 22:04:06.050255 kubelet[3195]: E1112 22:04:06.050194 3195 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:04:06.066873 kubelet[3195]: I1112 22:04:06.066846 3195 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:04:06.066873 kubelet[3195]: I1112 22:04:06.066864 3195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:04:06.066873 kubelet[3195]: I1112 22:04:06.066882 3195 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:04:06.067042 kubelet[3195]: I1112 22:04:06.067021 3195 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 22:04:06.067076 kubelet[3195]: I1112 22:04:06.067032 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 22:04:06.067076 kubelet[3195]: I1112 22:04:06.067051 3195 policy_none.go:49] "None policy: Start" Nov 12 22:04:06.067621 kubelet[3195]: I1112 22:04:06.067607 3195 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:04:06.067668 kubelet[3195]: I1112 22:04:06.067624 3195 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:04:06.067777 kubelet[3195]: I1112 22:04:06.067766 3195 state_mem.go:75] "Updated machine memory state" Nov 12 22:04:06.071149 kubelet[3195]: I1112 22:04:06.071126 3195 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:04:06.071304 kubelet[3195]: I1112 22:04:06.071274 3195 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 22:04:06.071389 kubelet[3195]: I1112 22:04:06.071365 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:04:06.147310 kubelet[3195]: I1112 22:04:06.147240 3195 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.150592 kubelet[3195]: I1112 22:04:06.150474 3195 topology_manager.go:215] "Topology Admit Handler" podUID="9faed0542f01307fdcf83e568abe81bd" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.150830 kubelet[3195]: I1112 22:04:06.150653 3195 topology_manager.go:215] "Topology Admit Handler" podUID="56525a49ef592d0fb385cf91ce4ea459" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.150830 kubelet[3195]: I1112 22:04:06.150819 3195 topology_manager.go:215] "Topology Admit Handler" podUID="7cacfc83d7f0994f92dd46e67a3b41f3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.156745 kubelet[3195]: I1112 22:04:06.156689 3195 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.157003 kubelet[3195]: I1112 22:04:06.156877 3195 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.165160 kubelet[3195]: W1112 22:04:06.165073 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 22:04:06.165397 kubelet[3195]: W1112 22:04:06.165202 3195 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 22:04:06.165397 kubelet[3195]: W1112 22:04:06.165345 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 22:04:06.341739 kubelet[3195]: I1112 22:04:06.341515 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9faed0542f01307fdcf83e568abe81bd-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-a9d0314af7\" (UID: \"9faed0542f01307fdcf83e568abe81bd\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.341739 kubelet[3195]: I1112 22:04:06.341645 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56525a49ef592d0fb385cf91ce4ea459-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-a9d0314af7\" (UID: \"56525a49ef592d0fb385cf91ce4ea459\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.341739 kubelet[3195]: I1112 22:04:06.341732 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56525a49ef592d0fb385cf91ce4ea459-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-a9d0314af7\" (UID: \"56525a49ef592d0fb385cf91ce4ea459\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.342231 kubelet[3195]: I1112 22:04:06.341812 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.342231 kubelet[3195]: I1112 22:04:06.341901 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.342231 kubelet[3195]: I1112 22:04:06.341967 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.342231 kubelet[3195]: I1112 22:04:06.342052 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.342231 kubelet[3195]: I1112 22:04:06.342149 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/56525a49ef592d0fb385cf91ce4ea459-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-a9d0314af7\" (UID: \"56525a49ef592d0fb385cf91ce4ea459\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:06.342914 kubelet[3195]: I1112 22:04:06.342230 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7cacfc83d7f0994f92dd46e67a3b41f3-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" (UID: \"7cacfc83d7f0994f92dd46e67a3b41f3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:07.037506 kubelet[3195]: I1112 22:04:07.037425 3195 apiserver.go:52] "Watching apiserver" Nov 12 22:04:07.067831 kubelet[3195]: W1112 22:04:07.067640 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 22:04:07.068105 kubelet[3195]: E1112 22:04:07.067827 3195 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.0-a-a9d0314af7\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:07.071145 kubelet[3195]: W1112 22:04:07.068977 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 22:04:07.071145 kubelet[3195]: E1112 22:04:07.069160 3195 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.0-a-a9d0314af7\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:07.072187 kubelet[3195]: W1112 22:04:07.072130 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 22:04:07.072388 kubelet[3195]: E1112 22:04:07.072377 3195 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.0-a-a9d0314af7\" already exists" pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:07.084087 kubelet[3195]: I1112 22:04:07.084055 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-a-a9d0314af7" podStartSLOduration=1.084029711 podStartE2EDuration="1.084029711s" podCreationTimestamp="2024-11-12 22:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:07.084010546 +0000 UTC m=+1.105849213" watchObservedRunningTime="2024-11-12 22:04:07.084029711 +0000 UTC m=+1.105868374" Nov 12 22:04:07.108393 kubelet[3195]: I1112 22:04:07.108277 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.0-a-a9d0314af7" podStartSLOduration=1.108234912 podStartE2EDuration="1.108234912s" podCreationTimestamp="2024-11-12 22:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:07.087811867 +0000 UTC m=+1.109650531" watchObservedRunningTime="2024-11-12 22:04:07.108234912 +0000 UTC m=+1.130073644" Nov 12 22:04:07.124066 kubelet[3195]: I1112 22:04:07.123958 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-a-a9d0314af7" 
podStartSLOduration=1.123922373 podStartE2EDuration="1.123922373s" podCreationTimestamp="2024-11-12 22:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:07.108524488 +0000 UTC m=+1.130363227" watchObservedRunningTime="2024-11-12 22:04:07.123922373 +0000 UTC m=+1.145761087" Nov 12 22:04:07.140235 kubelet[3195]: I1112 22:04:07.140169 3195 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 22:04:10.442617 sudo[2041]: pam_unix(sudo:session): session closed for user root Nov 12 22:04:10.443440 sshd[2038]: pam_unix(sshd:session): session closed for user core Nov 12 22:04:10.444858 systemd[1]: sshd@6-147.75.202.249:22-147.75.109.163:42510.service: Deactivated successfully. Nov 12 22:04:10.445770 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:04:10.445863 systemd[1]: session-9.scope: Consumed 3.463s CPU time, 201.0M memory peak, 0B memory swap peak. Nov 12 22:04:10.446435 systemd-logind[1794]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:04:10.446976 systemd-logind[1794]: Removed session 9. Nov 12 22:04:20.320413 kubelet[3195]: I1112 22:04:20.320330 3195 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 22:04:20.321775 kubelet[3195]: I1112 22:04:20.321559 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 22:04:20.321995 containerd[1811]: time="2024-11-12T22:04:20.321087426Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 22:04:20.463180 kubelet[3195]: I1112 22:04:20.463103 3195 topology_manager.go:215] "Topology Admit Handler" podUID="f419bdfb-8fc9-473b-bf52-36adc7d8a919" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-d99rt" Nov 12 22:04:20.471724 systemd[1]: Created slice kubepods-besteffort-podf419bdfb_8fc9_473b_bf52_36adc7d8a919.slice - libcontainer container kubepods-besteffort-podf419bdfb_8fc9_473b_bf52_36adc7d8a919.slice. Nov 12 22:04:20.549288 kubelet[3195]: I1112 22:04:20.549171 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xptsq\" (UniqueName: \"kubernetes.io/projected/f419bdfb-8fc9-473b-bf52-36adc7d8a919-kube-api-access-xptsq\") pod \"tigera-operator-5645cfc98-d99rt\" (UID: \"f419bdfb-8fc9-473b-bf52-36adc7d8a919\") " pod="tigera-operator/tigera-operator-5645cfc98-d99rt" Nov 12 22:04:20.549547 kubelet[3195]: I1112 22:04:20.549286 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f419bdfb-8fc9-473b-bf52-36adc7d8a919-var-lib-calico\") pod \"tigera-operator-5645cfc98-d99rt\" (UID: \"f419bdfb-8fc9-473b-bf52-36adc7d8a919\") " pod="tigera-operator/tigera-operator-5645cfc98-d99rt" Nov 12 22:04:20.728709 kubelet[3195]: I1112 22:04:20.728628 3195 topology_manager.go:215] "Topology Admit Handler" podUID="c58dba0c-a300-4402-bccf-86ffd843bb1e" podNamespace="kube-system" podName="kube-proxy-hlqrx" Nov 12 22:04:20.747245 systemd[1]: Created slice kubepods-besteffort-podc58dba0c_a300_4402_bccf_86ffd843bb1e.slice - libcontainer container kubepods-besteffort-podc58dba0c_a300_4402_bccf_86ffd843bb1e.slice. 
Nov 12 22:04:20.750576 kubelet[3195]: I1112 22:04:20.750524 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c58dba0c-a300-4402-bccf-86ffd843bb1e-kube-proxy\") pod \"kube-proxy-hlqrx\" (UID: \"c58dba0c-a300-4402-bccf-86ffd843bb1e\") " pod="kube-system/kube-proxy-hlqrx" Nov 12 22:04:20.750757 kubelet[3195]: I1112 22:04:20.750583 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c58dba0c-a300-4402-bccf-86ffd843bb1e-xtables-lock\") pod \"kube-proxy-hlqrx\" (UID: \"c58dba0c-a300-4402-bccf-86ffd843bb1e\") " pod="kube-system/kube-proxy-hlqrx" Nov 12 22:04:20.750757 kubelet[3195]: I1112 22:04:20.750620 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c58dba0c-a300-4402-bccf-86ffd843bb1e-lib-modules\") pod \"kube-proxy-hlqrx\" (UID: \"c58dba0c-a300-4402-bccf-86ffd843bb1e\") " pod="kube-system/kube-proxy-hlqrx" Nov 12 22:04:20.750757 kubelet[3195]: I1112 22:04:20.750659 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4tp\" (UniqueName: \"kubernetes.io/projected/c58dba0c-a300-4402-bccf-86ffd843bb1e-kube-api-access-8q4tp\") pod \"kube-proxy-hlqrx\" (UID: \"c58dba0c-a300-4402-bccf-86ffd843bb1e\") " pod="kube-system/kube-proxy-hlqrx" Nov 12 22:04:20.793475 containerd[1811]: time="2024-11-12T22:04:20.793362483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-d99rt,Uid:f419bdfb-8fc9-473b-bf52-36adc7d8a919,Namespace:tigera-operator,Attempt:0,}" Nov 12 22:04:20.804232 containerd[1811]: time="2024-11-12T22:04:20.804193456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:20.804232 containerd[1811]: time="2024-11-12T22:04:20.804222546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:20.804232 containerd[1811]: time="2024-11-12T22:04:20.804230322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:20.804346 containerd[1811]: time="2024-11-12T22:04:20.804273246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:20.824382 systemd[1]: Started cri-containerd-e92a692b3907dfcb4e333824ba4d31497c26b888806336402366bb9786716f32.scope - libcontainer container e92a692b3907dfcb4e333824ba4d31497c26b888806336402366bb9786716f32. 
Nov 12 22:04:20.850521 containerd[1811]: time="2024-11-12T22:04:20.850498440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-d99rt,Uid:f419bdfb-8fc9-473b-bf52-36adc7d8a919,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e92a692b3907dfcb4e333824ba4d31497c26b888806336402366bb9786716f32\"" Nov 12 22:04:20.851516 containerd[1811]: time="2024-11-12T22:04:20.851499245Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 22:04:21.053006 containerd[1811]: time="2024-11-12T22:04:21.052913552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hlqrx,Uid:c58dba0c-a300-4402-bccf-86ffd843bb1e,Namespace:kube-system,Attempt:0,}" Nov 12 22:04:21.064533 containerd[1811]: time="2024-11-12T22:04:21.064393110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:21.064533 containerd[1811]: time="2024-11-12T22:04:21.064471843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:21.064533 containerd[1811]: time="2024-11-12T22:04:21.064478924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:21.064677 containerd[1811]: time="2024-11-12T22:04:21.064559747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:21.083432 systemd[1]: Started cri-containerd-74ab558ae36b2bf9a7c3b4dce7a1c7dda05246eb4a28b45a266c621dfa31c5b3.scope - libcontainer container 74ab558ae36b2bf9a7c3b4dce7a1c7dda05246eb4a28b45a266c621dfa31c5b3. Nov 12 22:04:21.093383 containerd[1811]: time="2024-11-12T22:04:21.093330169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hlqrx,Uid:c58dba0c-a300-4402-bccf-86ffd843bb1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"74ab558ae36b2bf9a7c3b4dce7a1c7dda05246eb4a28b45a266c621dfa31c5b3\"" Nov 12 22:04:21.094670 containerd[1811]: time="2024-11-12T22:04:21.094618743Z" level=info msg="CreateContainer within sandbox \"74ab558ae36b2bf9a7c3b4dce7a1c7dda05246eb4a28b45a266c621dfa31c5b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:04:21.100457 containerd[1811]: time="2024-11-12T22:04:21.100415763Z" level=info msg="CreateContainer within sandbox \"74ab558ae36b2bf9a7c3b4dce7a1c7dda05246eb4a28b45a266c621dfa31c5b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf88c42081edd20d792112ec0963bf195de2cd7486fab436dff8708511f68be6\"" Nov 12 22:04:21.100782 containerd[1811]: time="2024-11-12T22:04:21.100717753Z" level=info msg="StartContainer for \"cf88c42081edd20d792112ec0963bf195de2cd7486fab436dff8708511f68be6\"" Nov 12 22:04:21.124323 systemd[1]: Started cri-containerd-cf88c42081edd20d792112ec0963bf195de2cd7486fab436dff8708511f68be6.scope - libcontainer container cf88c42081edd20d792112ec0963bf195de2cd7486fab436dff8708511f68be6. 
Nov 12 22:04:21.137712 containerd[1811]: time="2024-11-12T22:04:21.137664031Z" level=info msg="StartContainer for \"cf88c42081edd20d792112ec0963bf195de2cd7486fab436dff8708511f68be6\" returns successfully" Nov 12 22:04:22.107791 kubelet[3195]: I1112 22:04:22.107733 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hlqrx" podStartSLOduration=2.107723743 podStartE2EDuration="2.107723743s" podCreationTimestamp="2024-11-12 22:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:22.107643364 +0000 UTC m=+16.129482029" watchObservedRunningTime="2024-11-12 22:04:22.107723743 +0000 UTC m=+16.129562404" Nov 12 22:04:23.524381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063942332.mount: Deactivated successfully. Nov 12 22:04:23.739802 containerd[1811]: time="2024-11-12T22:04:23.739776791Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:23.740009 containerd[1811]: time="2024-11-12T22:04:23.739990278Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763339" Nov 12 22:04:23.740289 containerd[1811]: time="2024-11-12T22:04:23.740248192Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:23.741391 containerd[1811]: time="2024-11-12T22:04:23.741350399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:23.742136 containerd[1811]: time="2024-11-12T22:04:23.742078118Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 2.890558974s" Nov 12 22:04:23.742136 containerd[1811]: time="2024-11-12T22:04:23.742097821Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 22:04:23.743139 containerd[1811]: time="2024-11-12T22:04:23.743127501Z" level=info msg="CreateContainer within sandbox \"e92a692b3907dfcb4e333824ba4d31497c26b888806336402366bb9786716f32\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 22:04:23.746822 containerd[1811]: time="2024-11-12T22:04:23.746774193Z" level=info msg="CreateContainer within sandbox \"e92a692b3907dfcb4e333824ba4d31497c26b888806336402366bb9786716f32\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"febae96803203ed00ae23084526cb7be4e6783ee5c5c68e8e764babcd1794e64\"" Nov 12 22:04:23.746985 containerd[1811]: time="2024-11-12T22:04:23.746973226Z" level=info msg="StartContainer for \"febae96803203ed00ae23084526cb7be4e6783ee5c5c68e8e764babcd1794e64\"" Nov 12 22:04:23.767376 systemd[1]: Started cri-containerd-febae96803203ed00ae23084526cb7be4e6783ee5c5c68e8e764babcd1794e64.scope - libcontainer container febae96803203ed00ae23084526cb7be4e6783ee5c5c68e8e764babcd1794e64. 
Nov 12 22:04:23.777998 containerd[1811]: time="2024-11-12T22:04:23.777943639Z" level=info msg="StartContainer for \"febae96803203ed00ae23084526cb7be4e6783ee5c5c68e8e764babcd1794e64\" returns successfully" Nov 12 22:04:24.118619 kubelet[3195]: I1112 22:04:24.118522 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-d99rt" podStartSLOduration=1.227255338 podStartE2EDuration="4.11848737s" podCreationTimestamp="2024-11-12 22:04:20 +0000 UTC" firstStartedPulling="2024-11-12 22:04:20.851261501 +0000 UTC m=+14.873100167" lastFinishedPulling="2024-11-12 22:04:23.742493535 +0000 UTC m=+17.764332199" observedRunningTime="2024-11-12 22:04:24.118292683 +0000 UTC m=+18.140131445" watchObservedRunningTime="2024-11-12 22:04:24.11848737 +0000 UTC m=+18.140326083" Nov 12 22:04:25.183641 update_engine[1799]: I20241112 22:04:25.183491 1799 update_attempter.cc:509] Updating boot flags... Nov 12 22:04:25.224100 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3661) Nov 12 22:04:26.563866 kubelet[3195]: I1112 22:04:26.563485 3195 topology_manager.go:215] "Topology Admit Handler" podUID="69a6a41a-9ffb-4b15-b959-5e02864b1cb1" podNamespace="calico-system" podName="calico-typha-5dcf5c799d-pxcf9" Nov 12 22:04:26.575420 systemd[1]: Created slice kubepods-besteffort-pod69a6a41a_9ffb_4b15_b959_5e02864b1cb1.slice - libcontainer container kubepods-besteffort-pod69a6a41a_9ffb_4b15_b959_5e02864b1cb1.slice. Nov 12 22:04:26.591406 kubelet[3195]: I1112 22:04:26.591379 3195 topology_manager.go:215] "Topology Admit Handler" podUID="9956537c-d3e7-4a14-bec4-dcbb471e59d8" podNamespace="calico-system" podName="calico-node-c4thf" Nov 12 22:04:26.594604 kubelet[3195]: I1112 22:04:26.594584 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-typha-certs\") pod \"calico-typha-5dcf5c799d-pxcf9\" (UID: \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\") " pod="calico-system/calico-typha-5dcf5c799d-pxcf9" Nov 12 22:04:26.594683 kubelet[3195]: I1112 22:04:26.594611 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-tigera-ca-bundle\") pod \"calico-typha-5dcf5c799d-pxcf9\" (UID: \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\") " pod="calico-system/calico-typha-5dcf5c799d-pxcf9" Nov 12 22:04:26.594683 kubelet[3195]: I1112 22:04:26.594629 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tww7\" (UniqueName: \"kubernetes.io/projected/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-kube-api-access-8tww7\") pod \"calico-typha-5dcf5c799d-pxcf9\" (UID: \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\") " pod="calico-system/calico-typha-5dcf5c799d-pxcf9" Nov 12 22:04:26.595956 systemd[1]: Created slice kubepods-besteffort-pod9956537c_d3e7_4a14_bec4_dcbb471e59d8.slice - libcontainer container kubepods-besteffort-pod9956537c_d3e7_4a14_bec4_dcbb471e59d8.slice. 
Nov 12 22:04:26.695323 kubelet[3195]: I1112 22:04:26.695252 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9956537c-d3e7-4a14-bec4-dcbb471e59d8-node-certs\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.695560 kubelet[3195]: I1112 22:04:26.695349 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-bin-dir\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.695560 kubelet[3195]: I1112 22:04:26.695411 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-net-dir\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.695797 kubelet[3195]: I1112 22:04:26.695587 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-run-calico\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.695797 kubelet[3195]: I1112 22:04:26.695684 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-log-dir\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.695797 kubelet[3195]: I1112 22:04:26.695742 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-xtables-lock\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.696134 kubelet[3195]: I1112 22:04:26.695792 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rkkw\" (UniqueName: \"kubernetes.io/projected/9956537c-d3e7-4a14-bec4-dcbb471e59d8-kube-api-access-5rkkw\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.696134 kubelet[3195]: I1112 22:04:26.695873 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-policysync\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.696134 kubelet[3195]: I1112 22:04:26.695926 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-lib-calico\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.696134 kubelet[3195]: I1112 22:04:26.695974 3195 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-flexvol-driver-host\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.696680 kubelet[3195]: I1112 22:04:26.696233 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9956537c-d3e7-4a14-bec4-dcbb471e59d8-tigera-ca-bundle\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.696680 kubelet[3195]: I1112 22:04:26.696342 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-lib-modules\") pod \"calico-node-c4thf\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " pod="calico-system/calico-node-c4thf" Nov 12 22:04:26.736351 kubelet[3195]: I1112 22:04:26.736286 3195 topology_manager.go:215] "Topology Admit Handler" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" podNamespace="calico-system" podName="csi-node-driver-5zp6z" Nov 12 22:04:26.736946 kubelet[3195]: E1112 22:04:26.736899 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zp6z" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" Nov 12 22:04:26.796966 kubelet[3195]: I1112 22:04:26.796943 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ab49c9f-8e53-44c9-8b09-d25ffd106921-socket-dir\") pod \"csi-node-driver-5zp6z\" (UID: \"2ab49c9f-8e53-44c9-8b09-d25ffd106921\") " pod="calico-system/csi-node-driver-5zp6z" Nov 12 22:04:26.796966 kubelet[3195]: I1112 22:04:26.796968 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27xqz\" (UniqueName: \"kubernetes.io/projected/2ab49c9f-8e53-44c9-8b09-d25ffd106921-kube-api-access-27xqz\") pod \"csi-node-driver-5zp6z\" (UID: \"2ab49c9f-8e53-44c9-8b09-d25ffd106921\") " pod="calico-system/csi-node-driver-5zp6z" Nov 12 22:04:26.797083 kubelet[3195]: I1112 22:04:26.796991 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ab49c9f-8e53-44c9-8b09-d25ffd106921-kubelet-dir\") pod \"csi-node-driver-5zp6z\" (UID: \"2ab49c9f-8e53-44c9-8b09-d25ffd106921\") " pod="calico-system/csi-node-driver-5zp6z" Nov 12 22:04:26.797083 kubelet[3195]: I1112 22:04:26.797017 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2ab49c9f-8e53-44c9-8b09-d25ffd106921-varrun\") pod \"csi-node-driver-5zp6z\" (UID: \"2ab49c9f-8e53-44c9-8b09-d25ffd106921\") " pod="calico-system/csi-node-driver-5zp6z" Nov 12 22:04:26.797083 kubelet[3195]: I1112 22:04:26.797028 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ab49c9f-8e53-44c9-8b09-d25ffd106921-registration-dir\") pod \"csi-node-driver-5zp6z\" 
(UID: \"2ab49c9f-8e53-44c9-8b09-d25ffd106921\") " pod="calico-system/csi-node-driver-5zp6z" Nov 12 22:04:26.797873 kubelet[3195]: E1112 22:04:26.797842 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.797873 kubelet[3195]: W1112 22:04:26.797852 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.797873 kubelet[3195]: E1112 22:04:26.797861 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.798918 kubelet[3195]: E1112 22:04:26.798880 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.798918 kubelet[3195]: W1112 22:04:26.798888 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.798918 kubelet[3195]: E1112 22:04:26.798895 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.802237 kubelet[3195]: E1112 22:04:26.802196 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.802237 kubelet[3195]: W1112 22:04:26.802203 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.802237 kubelet[3195]: E1112 22:04:26.802215 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.879034 containerd[1811]: time="2024-11-12T22:04:26.879010172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dcf5c799d-pxcf9,Uid:69a6a41a-9ffb-4b15-b959-5e02864b1cb1,Namespace:calico-system,Attempt:0,}" Nov 12 22:04:26.890466 containerd[1811]: time="2024-11-12T22:04:26.890404627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:26.890466 containerd[1811]: time="2024-11-12T22:04:26.890452750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:26.890466 containerd[1811]: time="2024-11-12T22:04:26.890464827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:26.890582 containerd[1811]: time="2024-11-12T22:04:26.890528943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:26.897456 kubelet[3195]: E1112 22:04:26.897438 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.897456 kubelet[3195]: W1112 22:04:26.897451 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.897555 kubelet[3195]: E1112 22:04:26.897466 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.897588 kubelet[3195]: E1112 22:04:26.897581 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.897614 kubelet[3195]: W1112 22:04:26.897588 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.897614 kubelet[3195]: E1112 22:04:26.897597 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.897654 containerd[1811]: time="2024-11-12T22:04:26.897613274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c4thf,Uid:9956537c-d3e7-4a14-bec4-dcbb471e59d8,Namespace:calico-system,Attempt:0,}" Nov 12 22:04:26.897691 kubelet[3195]: E1112 22:04:26.897686 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.897691 kubelet[3195]: W1112 22:04:26.897691 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.897728 kubelet[3195]: E1112 22:04:26.897696 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.897800 kubelet[3195]: E1112 22:04:26.897793 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.897800 kubelet[3195]: W1112 22:04:26.897799 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.897847 kubelet[3195]: E1112 22:04:26.897806 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:04:26.897934 kubelet[3195]: E1112 22:04:26.897923 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.897961 kubelet[3195]: W1112 22:04:26.897937 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.897961 kubelet[3195]: E1112 22:04:26.897949 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898045 kubelet[3195]: E1112 22:04:26.898039 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898045 kubelet[3195]: W1112 22:04:26.898044 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898109 kubelet[3195]: E1112 22:04:26.898051 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898145 kubelet[3195]: E1112 22:04:26.898138 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898145 kubelet[3195]: W1112 22:04:26.898143 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898200 kubelet[3195]: E1112 22:04:26.898148 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898295 kubelet[3195]: E1112 22:04:26.898289 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898295 kubelet[3195]: W1112 22:04:26.898293 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898338 kubelet[3195]: E1112 22:04:26.898299 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898397 kubelet[3195]: E1112 22:04:26.898390 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898397 kubelet[3195]: W1112 22:04:26.898396 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898434 kubelet[3195]: E1112 22:04:26.898403 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:04:26.898483 kubelet[3195]: E1112 22:04:26.898479 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898502 kubelet[3195]: W1112 22:04:26.898483 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898502 kubelet[3195]: E1112 22:04:26.898489 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898565 kubelet[3195]: E1112 22:04:26.898561 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898584 kubelet[3195]: W1112 22:04:26.898565 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898584 kubelet[3195]: E1112 22:04:26.898571 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898663 kubelet[3195]: E1112 22:04:26.898658 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898681 kubelet[3195]: W1112 22:04:26.898663 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898681 kubelet[3195]: E1112 22:04:26.898668 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898759 kubelet[3195]: E1112 22:04:26.898754 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898779 kubelet[3195]: W1112 22:04:26.898759 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898779 kubelet[3195]: E1112 22:04:26.898766 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.898857 kubelet[3195]: E1112 22:04:26.898853 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898878 kubelet[3195]: W1112 22:04:26.898857 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898878 kubelet[3195]: E1112 22:04:26.898867 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:04:26.898933 kubelet[3195]: E1112 22:04:26.898929 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.898933 kubelet[3195]: W1112 22:04:26.898933 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.898971 kubelet[3195]: E1112 22:04:26.898943 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899005 kubelet[3195]: E1112 22:04:26.899001 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899005 kubelet[3195]: W1112 22:04:26.899005 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899075 kubelet[3195]: E1112 22:04:26.899019 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899075 kubelet[3195]: E1112 22:04:26.899074 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899131 kubelet[3195]: W1112 22:04:26.899078 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899131 kubelet[3195]: E1112 22:04:26.899084 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899373 kubelet[3195]: E1112 22:04:26.899192 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899373 kubelet[3195]: W1112 22:04:26.899198 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899373 kubelet[3195]: E1112 22:04:26.899206 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899373 kubelet[3195]: E1112 22:04:26.899289 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899373 kubelet[3195]: W1112 22:04:26.899294 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899373 kubelet[3195]: E1112 22:04:26.899301 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:04:26.899534 kubelet[3195]: E1112 22:04:26.899381 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899534 kubelet[3195]: W1112 22:04:26.899387 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899534 kubelet[3195]: E1112 22:04:26.899395 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899534 kubelet[3195]: E1112 22:04:26.899506 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899534 kubelet[3195]: W1112 22:04:26.899512 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899534 kubelet[3195]: E1112 22:04:26.899521 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899688 kubelet[3195]: E1112 22:04:26.899611 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899688 kubelet[3195]: W1112 22:04:26.899618 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899688 kubelet[3195]: E1112 22:04:26.899626 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899739 kubelet[3195]: E1112 22:04:26.899712 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899739 kubelet[3195]: W1112 22:04:26.899718 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899739 kubelet[3195]: E1112 22:04:26.899726 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.899837 kubelet[3195]: E1112 22:04:26.899831 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899837 kubelet[3195]: W1112 22:04:26.899837 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899877 kubelet[3195]: E1112 22:04:26.899843 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:04:26.899937 kubelet[3195]: E1112 22:04:26.899932 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.899937 kubelet[3195]: W1112 22:04:26.899937 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.899971 kubelet[3195]: E1112 22:04:26.899941 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 22:04:26.906388 containerd[1811]: time="2024-11-12T22:04:26.906296480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:26.906388 containerd[1811]: time="2024-11-12T22:04:26.906348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:26.906579 containerd[1811]: time="2024-11-12T22:04:26.906532286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:26.906657 containerd[1811]: time="2024-11-12T22:04:26.906584208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:26.909245 systemd[1]: Started cri-containerd-4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59.scope - libcontainer container 4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59. Nov 12 22:04:26.912455 systemd[1]: Started cri-containerd-edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc.scope - libcontainer container edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc. Nov 12 22:04:26.918950 kubelet[3195]: E1112 22:04:26.918935 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 22:04:26.918950 kubelet[3195]: W1112 22:04:26.918946 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 22:04:26.919046 kubelet[3195]: E1112 22:04:26.918960 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 22:04:26.923575 containerd[1811]: time="2024-11-12T22:04:26.923546069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c4thf,Uid:9956537c-d3e7-4a14-bec4-dcbb471e59d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\"" Nov 12 22:04:26.924298 containerd[1811]: time="2024-11-12T22:04:26.924285519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 22:04:26.933106 containerd[1811]: time="2024-11-12T22:04:26.933056814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dcf5c799d-pxcf9,Uid:69a6a41a-9ffb-4b15-b959-5e02864b1cb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\"" Nov 12 22:04:27.105539 systemd-timesyncd[1727]: Timed out waiting for reply from [2604:4300:a:299::164]:123 (2.flatcar.pool.ntp.org). Nov 12 22:04:27.621896 systemd-resolved[1726]: Clock change detected. Flushing caches. Nov 12 22:04:27.622306 systemd-timesyncd[1727]: Contacted time server [2001:559:2be:3::1001]:123 (2.flatcar.pool.ntp.org). Nov 12 22:04:27.622435 systemd-timesyncd[1727]: Initial clock synchronization to Tue 2024-11-12 22:04:27.621747 UTC. Nov 12 22:04:28.495940 kubelet[3195]: E1112 22:04:28.495889 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zp6z" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" Nov 12 22:04:28.871075 containerd[1811]: time="2024-11-12T22:04:28.871011581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:28.871344 containerd[1811]: time="2024-11-12T22:04:28.871269364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 22:04:28.871680 containerd[1811]: time="2024-11-12T22:04:28.871668704Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:28.872612 containerd[1811]: time="2024-11-12T22:04:28.872598740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:28.873033 containerd[1811]: time="2024-11-12T22:04:28.873003606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.503552329s" Nov 12 22:04:28.873106 containerd[1811]: time="2024-11-12T22:04:28.873050937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 22:04:28.874023 containerd[1811]: time="2024-11-12T22:04:28.873950693Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 22:04:28.874707 containerd[1811]: time="2024-11-12T22:04:28.874668765Z" level=info msg="CreateContainer within sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 22:04:28.897058 containerd[1811]: time="2024-11-12T22:04:28.897036878Z" level=info msg="CreateContainer within sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f\"" Nov 12 22:04:28.897330 containerd[1811]: time="2024-11-12T22:04:28.897316711Z" level=info msg="StartContainer for \"84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f\"" Nov 12 22:04:28.918372 systemd[1]: Started cri-containerd-84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f.scope - libcontainer container 84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f. Nov 12 22:04:28.931658 containerd[1811]: time="2024-11-12T22:04:28.931607499Z" level=info msg="StartContainer for \"84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f\" returns successfully" Nov 12 22:04:28.937865 systemd[1]: cri-containerd-84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f.scope: Deactivated successfully. Nov 12 22:04:29.149144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f-rootfs.mount: Deactivated successfully. Nov 12 22:04:29.158316 containerd[1811]: time="2024-11-12T22:04:29.158225603Z" level=info msg="shim disconnected" id=84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f namespace=k8s.io Nov 12 22:04:29.158316 containerd[1811]: time="2024-11-12T22:04:29.158281693Z" level=warning msg="cleaning up after shim disconnected" id=84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f namespace=k8s.io Nov 12 22:04:29.158316 containerd[1811]: time="2024-11-12T22:04:29.158288703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:30.496529 kubelet[3195]: E1112 22:04:30.496491 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zp6z" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" Nov 12 22:04:30.982030 containerd[1811]: time="2024-11-12T22:04:30.981976022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:30.982246 containerd[1811]: time="2024-11-12T22:04:30.982204536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 22:04:30.982533 containerd[1811]: time="2024-11-12T22:04:30.982497403Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:30.983636 containerd[1811]: time="2024-11-12T22:04:30.983595820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:30.983880 containerd[1811]: 
time="2024-11-12T22:04:30.983838419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.109841603s" Nov 12 22:04:30.983880 containerd[1811]: time="2024-11-12T22:04:30.983855398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 22:04:30.984292 containerd[1811]: time="2024-11-12T22:04:30.984250882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 22:04:30.987308 containerd[1811]: time="2024-11-12T22:04:30.987295159Z" level=info msg="CreateContainer within sandbox \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 22:04:30.991290 containerd[1811]: time="2024-11-12T22:04:30.991253312Z" level=info msg="CreateContainer within sandbox \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\"" Nov 12 22:04:30.991508 containerd[1811]: time="2024-11-12T22:04:30.991442372Z" level=info msg="StartContainer for \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\"" Nov 12 22:04:31.012362 systemd[1]: Started cri-containerd-24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb.scope - libcontainer container 24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb. 
Nov 12 22:04:31.038086 containerd[1811]: time="2024-11-12T22:04:31.038067062Z" level=info msg="StartContainer for \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\" returns successfully" Nov 12 22:04:31.608847 kubelet[3195]: I1112 22:04:31.608683 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dcf5c799d-pxcf9" podStartSLOduration=2.003188949 podStartE2EDuration="5.608631204s" podCreationTimestamp="2024-11-12 22:04:26 +0000 UTC" firstStartedPulling="2024-11-12 22:04:26.933604965 +0000 UTC m=+20.955443631" lastFinishedPulling="2024-11-12 22:04:30.984196406 +0000 UTC m=+24.560885886" observedRunningTime="2024-11-12 22:04:31.608416167 +0000 UTC m=+25.185105753" watchObservedRunningTime="2024-11-12 22:04:31.608631204 +0000 UTC m=+25.185320736" Nov 12 22:04:32.496619 kubelet[3195]: E1112 22:04:32.496498 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zp6z" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" Nov 12 22:04:32.590089 kubelet[3195]: I1112 22:04:32.590022 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:04:34.496452 kubelet[3195]: E1112 22:04:34.496424 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zp6z" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" Nov 12 22:04:34.935637 containerd[1811]: time="2024-11-12T22:04:34.935584810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:34.935844 containerd[1811]: time="2024-11-12T22:04:34.935778924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 22:04:34.936195 containerd[1811]: time="2024-11-12T22:04:34.936154554Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:34.937131 containerd[1811]: time="2024-11-12T22:04:34.937090206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:34.937536 containerd[1811]: time="2024-11-12T22:04:34.937492845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 3.953225977s" Nov 12 22:04:34.937536 containerd[1811]: time="2024-11-12T22:04:34.937511080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 22:04:34.938565 containerd[1811]: time="2024-11-12T22:04:34.938543073Z" level=info msg="CreateContainer within sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 22:04:34.943066 containerd[1811]: time="2024-11-12T22:04:34.943025172Z" level=info msg="CreateContainer within sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b\"" Nov 12 22:04:34.943232 containerd[1811]: time="2024-11-12T22:04:34.943220194Z" level=info msg="StartContainer for \"b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b\"" Nov 12 22:04:34.963453 systemd[1]: Started cri-containerd-b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b.scope - libcontainer container b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b. Nov 12 22:04:34.976047 containerd[1811]: time="2024-11-12T22:04:34.975960538Z" level=info msg="StartContainer for \"b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b\" returns successfully" Nov 12 22:04:35.206921 kubelet[3195]: I1112 22:04:35.206820 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:04:35.512677 systemd[1]: cri-containerd-b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b.scope: Deactivated successfully. Nov 12 22:04:35.524058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b-rootfs.mount: Deactivated successfully. Nov 12 22:04:35.561750 kubelet[3195]: I1112 22:04:35.561674 3195 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 22:04:35.595574 kubelet[3195]: I1112 22:04:35.595173 3195 topology_manager.go:215] "Topology Admit Handler" podUID="d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4b7bm" Nov 12 22:04:35.596918 kubelet[3195]: I1112 22:04:35.596712 3195 topology_manager.go:215] "Topology Admit Handler" podUID="1044f927-3795-4d3f-aa90-111cd417f193" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b8h7n" Nov 12 22:04:35.598473 kubelet[3195]: I1112 22:04:35.598395 3195 topology_manager.go:215] "Topology Admit Handler" podUID="f7d2b67c-78b9-4645-b1a5-5cfb719e011e" podNamespace="calico-system" podName="calico-kube-controllers-7546446886-8b475" Nov 12 22:04:35.599445 kubelet[3195]: I1112 22:04:35.599380 3195 topology_manager.go:215] "Topology Admit Handler" podUID="920c6399-0c92-4fd2-94f7-2a8b4fbecff5" podNamespace="calico-apiserver" podName="calico-apiserver-7d4b7b8c7-97n44" Nov 12 22:04:35.600783 kubelet[3195]: I1112 22:04:35.600679 3195 topology_manager.go:215] "Topology Admit Handler" podUID="a9130193-3858-4a34-9c6f-60605387578f" podNamespace="calico-apiserver" podName="calico-apiserver-7d4b7b8c7-kll9h" Nov 12 22:04:35.602840 kubelet[3195]: I1112 22:04:35.602776 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qkrs\" (UniqueName: \"kubernetes.io/projected/d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b-kube-api-access-4qkrs\") pod \"coredns-7db6d8ff4d-4b7bm\" (UID: \"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b\") " pod="kube-system/coredns-7db6d8ff4d-4b7bm" Nov 12 22:04:35.603080 kubelet[3195]: I1112 22:04:35.602892 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/920c6399-0c92-4fd2-94f7-2a8b4fbecff5-calico-apiserver-certs\") pod \"calico-apiserver-7d4b7b8c7-97n44\" (UID: 
\"920c6399-0c92-4fd2-94f7-2a8b4fbecff5\") " pod="calico-apiserver/calico-apiserver-7d4b7b8c7-97n44" Nov 12 22:04:35.603080 kubelet[3195]: I1112 22:04:35.602996 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jjg\" (UniqueName: \"kubernetes.io/projected/920c6399-0c92-4fd2-94f7-2a8b4fbecff5-kube-api-access-q7jjg\") pod \"calico-apiserver-7d4b7b8c7-97n44\" (UID: \"920c6399-0c92-4fd2-94f7-2a8b4fbecff5\") " pod="calico-apiserver/calico-apiserver-7d4b7b8c7-97n44" Nov 12 22:04:35.603403 kubelet[3195]: I1112 22:04:35.603110 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b-config-volume\") pod \"coredns-7db6d8ff4d-4b7bm\" (UID: \"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b\") " pod="kube-system/coredns-7db6d8ff4d-4b7bm" Nov 12 22:04:35.603403 kubelet[3195]: I1112 22:04:35.603187 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1044f927-3795-4d3f-aa90-111cd417f193-config-volume\") pod \"coredns-7db6d8ff4d-b8h7n\" (UID: \"1044f927-3795-4d3f-aa90-111cd417f193\") " pod="kube-system/coredns-7db6d8ff4d-b8h7n" Nov 12 22:04:35.603403 kubelet[3195]: I1112 22:04:35.603282 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p5h5\" (UniqueName: \"kubernetes.io/projected/1044f927-3795-4d3f-aa90-111cd417f193-kube-api-access-2p5h5\") pod \"coredns-7db6d8ff4d-b8h7n\" (UID: \"1044f927-3795-4d3f-aa90-111cd417f193\") " pod="kube-system/coredns-7db6d8ff4d-b8h7n" Nov 12 22:04:35.603770 kubelet[3195]: I1112 22:04:35.603383 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvl7\" (UniqueName: \"kubernetes.io/projected/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-kube-api-access-ttvl7\") pod \"calico-kube-controllers-7546446886-8b475\" (UID: \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\") " pod="calico-system/calico-kube-controllers-7546446886-8b475" Nov 12 22:04:35.603770 kubelet[3195]: I1112 22:04:35.603572 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-tigera-ca-bundle\") pod \"calico-kube-controllers-7546446886-8b475\" (UID: \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\") " pod="calico-system/calico-kube-controllers-7546446886-8b475" Nov 12 22:04:35.615471 systemd[1]: Created slice kubepods-burstable-podd6ab8fe9_0b2f_42b5_9df3_8800f48bbf2b.slice - libcontainer container kubepods-burstable-podd6ab8fe9_0b2f_42b5_9df3_8800f48bbf2b.slice. Nov 12 22:04:35.627998 systemd[1]: Created slice kubepods-burstable-pod1044f927_3795_4d3f_aa90_111cd417f193.slice - libcontainer container kubepods-burstable-pod1044f927_3795_4d3f_aa90_111cd417f193.slice. Nov 12 22:04:35.637804 systemd[1]: Created slice kubepods-besteffort-podf7d2b67c_78b9_4645_b1a5_5cfb719e011e.slice - libcontainer container kubepods-besteffort-podf7d2b67c_78b9_4645_b1a5_5cfb719e011e.slice. Nov 12 22:04:35.643795 systemd[1]: Created slice kubepods-besteffort-pod920c6399_0c92_4fd2_94f7_2a8b4fbecff5.slice - libcontainer container kubepods-besteffort-pod920c6399_0c92_4fd2_94f7_2a8b4fbecff5.slice. 
Nov 12 22:04:35.648903 systemd[1]: Created slice kubepods-besteffort-poda9130193_3858_4a34_9c6f_60605387578f.slice - libcontainer container kubepods-besteffort-poda9130193_3858_4a34_9c6f_60605387578f.slice. Nov 12 22:04:35.704092 kubelet[3195]: I1112 22:04:35.704025 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvg7p\" (UniqueName: \"kubernetes.io/projected/a9130193-3858-4a34-9c6f-60605387578f-kube-api-access-rvg7p\") pod \"calico-apiserver-7d4b7b8c7-kll9h\" (UID: \"a9130193-3858-4a34-9c6f-60605387578f\") " pod="calico-apiserver/calico-apiserver-7d4b7b8c7-kll9h" Nov 12 22:04:35.704092 kubelet[3195]: I1112 22:04:35.704053 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a9130193-3858-4a34-9c6f-60605387578f-calico-apiserver-certs\") pod \"calico-apiserver-7d4b7b8c7-kll9h\" (UID: \"a9130193-3858-4a34-9c6f-60605387578f\") " pod="calico-apiserver/calico-apiserver-7d4b7b8c7-kll9h" Nov 12 22:04:35.923127 containerd[1811]: time="2024-11-12T22:04:35.923023066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7bm,Uid:d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b,Namespace:kube-system,Attempt:0,}" Nov 12 22:04:35.933810 containerd[1811]: time="2024-11-12T22:04:35.933743582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b8h7n,Uid:1044f927-3795-4d3f-aa90-111cd417f193,Namespace:kube-system,Attempt:0,}" Nov 12 22:04:35.941782 containerd[1811]: time="2024-11-12T22:04:35.941702811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7546446886-8b475,Uid:f7d2b67c-78b9-4645-b1a5-5cfb719e011e,Namespace:calico-system,Attempt:0,}" Nov 12 22:04:35.947693 containerd[1811]: time="2024-11-12T22:04:35.947577797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-97n44,Uid:920c6399-0c92-4fd2-94f7-2a8b4fbecff5,Namespace:calico-apiserver,Attempt:0,}" Nov 12 22:04:35.951769 containerd[1811]: time="2024-11-12T22:04:35.951658496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-kll9h,Uid:a9130193-3858-4a34-9c6f-60605387578f,Namespace:calico-apiserver,Attempt:0,}" Nov 12 22:04:36.206007 containerd[1811]: time="2024-11-12T22:04:36.205927427Z" level=info msg="shim disconnected" id=b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b namespace=k8s.io Nov 12 22:04:36.206007 containerd[1811]: time="2024-11-12T22:04:36.205955604Z" level=warning msg="cleaning up after shim disconnected" id=b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b namespace=k8s.io Nov 12 22:04:36.206007 containerd[1811]: time="2024-11-12T22:04:36.205962035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:36.243755 containerd[1811]: time="2024-11-12T22:04:36.243714478Z" level=error msg="Failed to destroy network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.243972 containerd[1811]: time="2024-11-12T22:04:36.243957131Z" level=error msg="encountered an error cleaning up failed sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244003 containerd[1811]: time="2024-11-12T22:04:36.243989617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b8h7n,Uid:1044f927-3795-4d3f-aa90-111cd417f193,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244161 kubelet[3195]: E1112 22:04:36.244132 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244229 kubelet[3195]: E1112 22:04:36.244186 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b8h7n" Nov 12 22:04:36.244229 kubelet[3195]: E1112 22:04:36.244202 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b8h7n" Nov 12 22:04:36.244299 kubelet[3195]: E1112 22:04:36.244236 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b8h7n_kube-system(1044f927-3795-4d3f-aa90-111cd417f193)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b8h7n_kube-system(1044f927-3795-4d3f-aa90-111cd417f193)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b8h7n" podUID="1044f927-3795-4d3f-aa90-111cd417f193" Nov 12 22:04:36.244359 containerd[1811]: time="2024-11-12T22:04:36.244332472Z" level=error msg="Failed to destroy network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244567 containerd[1811]: time="2024-11-12T22:04:36.244547212Z" level=error msg="encountered an error cleaning up failed sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\", marking sandbox state 
as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244603 containerd[1811]: time="2024-11-12T22:04:36.244586959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7546446886-8b475,Uid:f7d2b67c-78b9-4645-b1a5-5cfb719e011e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244674 containerd[1811]: time="2024-11-12T22:04:36.244655751Z" level=error msg="Failed to destroy network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244701 kubelet[3195]: E1112 22:04:36.244685 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244727 kubelet[3195]: E1112 22:04:36.244710 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7546446886-8b475" Nov 12 22:04:36.244727 kubelet[3195]: E1112 22:04:36.244723 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7546446886-8b475" Nov 12 22:04:36.244803 kubelet[3195]: E1112 22:04:36.244746 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7546446886-8b475_calico-system(f7d2b67c-78b9-4645-b1a5-5cfb719e011e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7546446886-8b475_calico-system(f7d2b67c-78b9-4645-b1a5-5cfb719e011e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7546446886-8b475" podUID="f7d2b67c-78b9-4645-b1a5-5cfb719e011e" Nov 12 22:04:36.244871 containerd[1811]: 
time="2024-11-12T22:04:36.244855020Z" level=error msg="encountered an error cleaning up failed sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244904 containerd[1811]: time="2024-11-12T22:04:36.244886518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7bm,Uid:d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.244986 kubelet[3195]: E1112 22:04:36.244969 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.245014 kubelet[3195]: E1112 22:04:36.245000 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7bm" Nov 12 22:04:36.245037 kubelet[3195]: E1112 22:04:36.245021 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7bm" Nov 12 22:04:36.245068 kubelet[3195]: E1112 22:04:36.245051 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4b7bm_kube-system(d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4b7bm_kube-system(d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7bm" podUID="d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b" Nov 12 22:04:36.245300 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c-shm.mount: Deactivated successfully. 
Nov 12 22:04:36.253564 containerd[1811]: time="2024-11-12T22:04:36.253495707Z" level=error msg="Failed to destroy network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.253754 containerd[1811]: time="2024-11-12T22:04:36.253701631Z" level=error msg="encountered an error cleaning up failed sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.253797 containerd[1811]: time="2024-11-12T22:04:36.253751952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-kll9h,Uid:a9130193-3858-4a34-9c6f-60605387578f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.253868 containerd[1811]: time="2024-11-12T22:04:36.253791463Z" level=error msg="Failed to destroy network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.253967 kubelet[3195]: E1112 22:04:36.253943 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.254013 kubelet[3195]: E1112 22:04:36.253981 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-kll9h" Nov 12 22:04:36.254013 kubelet[3195]: E1112 22:04:36.253994 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-kll9h" Nov 12 22:04:36.254075 containerd[1811]: time="2024-11-12T22:04:36.253989718Z" level=error msg="encountered an error cleaning up failed sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.254103 kubelet[3195]: E1112 22:04:36.254022 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4b7b8c7-kll9h_calico-apiserver(a9130193-3858-4a34-9c6f-60605387578f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4b7b8c7-kll9h_calico-apiserver(a9130193-3858-4a34-9c6f-60605387578f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-kll9h" podUID="a9130193-3858-4a34-9c6f-60605387578f" Nov 12 22:04:36.254854 containerd[1811]: time="2024-11-12T22:04:36.254023807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-97n44,Uid:920c6399-0c92-4fd2-94f7-2a8b4fbecff5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.254930 kubelet[3195]: E1112 22:04:36.254398 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.254930 kubelet[3195]: E1112 22:04:36.254432 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-97n44" Nov 12 22:04:36.254930 kubelet[3195]: E1112 22:04:36.254447 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-97n44" Nov 12 22:04:36.254995 kubelet[3195]: E1112 22:04:36.254472 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4b7b8c7-97n44_calico-apiserver(920c6399-0c92-4fd2-94f7-2a8b4fbecff5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4b7b8c7-97n44_calico-apiserver(920c6399-0c92-4fd2-94f7-2a8b4fbecff5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-97n44" podUID="920c6399-0c92-4fd2-94f7-2a8b4fbecff5" Nov 12 22:04:36.512081 systemd[1]: Created slice kubepods-besteffort-pod2ab49c9f_8e53_44c9_8b09_d25ffd106921.slice - libcontainer container kubepods-besteffort-pod2ab49c9f_8e53_44c9_8b09_d25ffd106921.slice. Nov 12 22:04:36.518146 containerd[1811]: time="2024-11-12T22:04:36.518028527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zp6z,Uid:2ab49c9f-8e53-44c9-8b09-d25ffd106921,Namespace:calico-system,Attempt:0,}" Nov 12 22:04:36.548821 containerd[1811]: time="2024-11-12T22:04:36.548792549Z" level=error msg="Failed to destroy network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.548997 containerd[1811]: time="2024-11-12T22:04:36.548982670Z" level=error msg="encountered an error cleaning up failed sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.549030 containerd[1811]: time="2024-11-12T22:04:36.549019710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zp6z,Uid:2ab49c9f-8e53-44c9-8b09-d25ffd106921,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.549175 kubelet[3195]: E1112 22:04:36.549148 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.549210 kubelet[3195]: E1112 22:04:36.549199 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5zp6z" Nov 12 22:04:36.549229 kubelet[3195]: E1112 22:04:36.549217 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5zp6z" Nov 12 
22:04:36.549276 kubelet[3195]: E1112 22:04:36.549256 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5zp6z_calico-system(2ab49c9f-8e53-44c9-8b09-d25ffd106921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5zp6z_calico-system(2ab49c9f-8e53-44c9-8b09-d25ffd106921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5zp6z" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" Nov 12 22:04:36.604020 kubelet[3195]: I1112 22:04:36.604006 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:04:36.604344 containerd[1811]: time="2024-11-12T22:04:36.604326888Z" level=info msg="StopPodSandbox for \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\"" Nov 12 22:04:36.604427 kubelet[3195]: I1112 22:04:36.604418 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:04:36.604453 containerd[1811]: time="2024-11-12T22:04:36.604422604Z" level=info msg="Ensure that sandbox 0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a in task-service has been cleanup successfully" Nov 12 22:04:36.604626 containerd[1811]: time="2024-11-12T22:04:36.604615525Z" level=info msg="StopPodSandbox for \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\"" Nov 12 22:04:36.604727 containerd[1811]: time="2024-11-12T22:04:36.604713102Z" level=info msg="Ensure that sandbox 2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c in task-service has been cleanup successfully" Nov 12 22:04:36.605711 kubelet[3195]: I1112 22:04:36.605695 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:04:36.605787 containerd[1811]: time="2024-11-12T22:04:36.605744714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 22:04:36.605969 containerd[1811]: time="2024-11-12T22:04:36.605951275Z" level=info msg="StopPodSandbox for \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\"" Nov 12 22:04:36.606108 containerd[1811]: time="2024-11-12T22:04:36.606094230Z" level=info msg="Ensure that sandbox 5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97 in task-service has been cleanup successfully" Nov 12 22:04:36.606281 kubelet[3195]: I1112 22:04:36.606266 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:36.606586 containerd[1811]: time="2024-11-12T22:04:36.606569456Z" level=info msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\"" Nov 12 22:04:36.606709 containerd[1811]: time="2024-11-12T22:04:36.606694507Z" level=info msg="Ensure that sandbox a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475 in task-service has been cleanup successfully" Nov 12 22:04:36.606912 kubelet[3195]: I1112 22:04:36.606900 3195 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:04:36.607211 containerd[1811]: time="2024-11-12T22:04:36.607192095Z" level=info msg="StopPodSandbox for \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\"" Nov 12 22:04:36.607359 containerd[1811]: time="2024-11-12T22:04:36.607339404Z" level=info msg="Ensure that sandbox bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd in task-service has been cleanup successfully" Nov 12 22:04:36.607561 kubelet[3195]: I1112 22:04:36.607544 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:04:36.607959 containerd[1811]: time="2024-11-12T22:04:36.607934996Z" level=info msg="StopPodSandbox for \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\"" Nov 12 22:04:36.608164 containerd[1811]: time="2024-11-12T22:04:36.608149298Z" level=info msg="Ensure that sandbox 79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995 in task-service has been cleanup successfully" Nov 12 22:04:36.622812 containerd[1811]: time="2024-11-12T22:04:36.622759256Z" level=error msg="StopPodSandbox for \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\" failed" error="failed to destroy network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.623285 kubelet[3195]: E1112 22:04:36.622995 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:04:36.623285 kubelet[3195]: E1112 22:04:36.623057 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c"} Nov 12 22:04:36.623285 kubelet[3195]: E1112 22:04:36.623116 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1044f927-3795-4d3f-aa90-111cd417f193\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 22:04:36.623285 kubelet[3195]: E1112 22:04:36.623132 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1044f927-3795-4d3f-aa90-111cd417f193\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b8h7n" 
podUID="1044f927-3795-4d3f-aa90-111cd417f193" Nov 12 22:04:36.624057 containerd[1811]: time="2024-11-12T22:04:36.624029130Z" level=error msg="StopPodSandbox for \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\" failed" error="failed to destroy network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.624134 kubelet[3195]: E1112 22:04:36.624116 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:04:36.624175 kubelet[3195]: E1112 22:04:36.624140 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97"} Nov 12 22:04:36.624175 kubelet[3195]: E1112 22:04:36.624162 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9130193-3858-4a34-9c6f-60605387578f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 22:04:36.624257 kubelet[3195]: E1112 22:04:36.624177 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9130193-3858-4a34-9c6f-60605387578f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-kll9h" podUID="a9130193-3858-4a34-9c6f-60605387578f" Nov 12 22:04:36.624313 containerd[1811]: time="2024-11-12T22:04:36.624286590Z" level=error msg="StopPodSandbox for \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\" failed" error="failed to destroy network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.624353 containerd[1811]: time="2024-11-12T22:04:36.624335228Z" level=error msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" failed" error="failed to destroy network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.624388 kubelet[3195]: E1112 22:04:36.624373 
3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:04:36.624413 kubelet[3195]: E1112 22:04:36.624396 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a"} Nov 12 22:04:36.624434 kubelet[3195]: E1112 22:04:36.624417 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"920c6399-0c92-4fd2-94f7-2a8b4fbecff5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 22:04:36.624434 kubelet[3195]: E1112 22:04:36.624410 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:36.624492 kubelet[3195]: E1112 22:04:36.624430 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"920c6399-0c92-4fd2-94f7-2a8b4fbecff5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-97n44" podUID="920c6399-0c92-4fd2-94f7-2a8b4fbecff5" Nov 12 22:04:36.624492 kubelet[3195]: E1112 22:04:36.624436 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475"} Nov 12 22:04:36.624492 kubelet[3195]: E1112 22:04:36.624455 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 22:04:36.624492 kubelet[3195]: E1112 22:04:36.624470 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7546446886-8b475" podUID="f7d2b67c-78b9-4645-b1a5-5cfb719e011e" Nov 12 22:04:36.624846 containerd[1811]: time="2024-11-12T22:04:36.624829041Z" level=error msg="StopPodSandbox for \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\" failed" error="failed to destroy network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.624894 kubelet[3195]: E1112 22:04:36.624884 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:04:36.624918 kubelet[3195]: E1112 22:04:36.624898 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd"} Nov 12 22:04:36.624918 kubelet[3195]: E1112 22:04:36.624910 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 22:04:36.624962 kubelet[3195]: E1112 22:04:36.624920 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7bm" podUID="d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b" Nov 12 22:04:36.627478 containerd[1811]: time="2024-11-12T22:04:36.627425755Z" level=error msg="StopPodSandbox for \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\" failed" error="failed to destroy network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 22:04:36.627521 kubelet[3195]: E1112 22:04:36.627496 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:04:36.627521 kubelet[3195]: E1112 22:04:36.627514 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995"} Nov 12 22:04:36.627560 kubelet[3195]: E1112 22:04:36.627531 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ab49c9f-8e53-44c9-8b09-d25ffd106921\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 22:04:36.627560 kubelet[3195]: E1112 22:04:36.627542 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ab49c9f-8e53-44c9-8b09-d25ffd106921\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5zp6z" podUID="2ab49c9f-8e53-44c9-8b09-d25ffd106921" Nov 12 22:04:36.947328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97-shm.mount: Deactivated successfully. Nov 12 22:04:36.947372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a-shm.mount: Deactivated successfully. Nov 12 22:04:36.947406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475-shm.mount: Deactivated successfully. Nov 12 22:04:36.947436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd-shm.mount: Deactivated successfully. Nov 12 22:04:41.473381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995303520.mount: Deactivated successfully. 
Nov 12 22:04:41.491278 containerd[1811]: time="2024-11-12T22:04:41.491258856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:41.491489 containerd[1811]: time="2024-11-12T22:04:41.491475456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 22:04:41.491786 containerd[1811]: time="2024-11-12T22:04:41.491775045Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:41.492668 containerd[1811]: time="2024-11-12T22:04:41.492626839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:41.493030 containerd[1811]: time="2024-11-12T22:04:41.492986912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 4.887221225s" Nov 12 22:04:41.493030 containerd[1811]: time="2024-11-12T22:04:41.493003241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 22:04:41.496550 containerd[1811]: time="2024-11-12T22:04:41.496504626Z" level=info msg="CreateContainer within sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 22:04:41.501970 containerd[1811]: time="2024-11-12T22:04:41.501925049Z" level=info msg="CreateContainer within sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\"" Nov 12 22:04:41.502144 containerd[1811]: time="2024-11-12T22:04:41.502130634Z" level=info msg="StartContainer for \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\"" Nov 12 22:04:41.529558 systemd[1]: Started cri-containerd-4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4.scope - libcontainer container 4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4. Nov 12 22:04:41.547140 containerd[1811]: time="2024-11-12T22:04:41.547112036Z" level=info msg="StartContainer for \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\" returns successfully" Nov 12 22:04:41.612136 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 22:04:41.612188 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 12 22:04:41.632178 kubelet[3195]: I1112 22:04:41.632136 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c4thf" podStartSLOduration=1.5080499280000002 podStartE2EDuration="15.63212023s" podCreationTimestamp="2024-11-12 22:04:26 +0000 UTC" firstStartedPulling="2024-11-12 22:04:26.924157992 +0000 UTC m=+20.945996655" lastFinishedPulling="2024-11-12 22:04:41.493377475 +0000 UTC m=+35.070066957" observedRunningTime="2024-11-12 22:04:41.632015667 +0000 UTC m=+35.208705152" watchObservedRunningTime="2024-11-12 22:04:41.63212023 +0000 UTC m=+35.208809715" Nov 12 22:04:42.626320 kubelet[3195]: I1112 22:04:42.626236 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:04:42.892250 kernel: bpftool[4735]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 22:04:43.038319 systemd-networkd[1725]: vxlan.calico: Link UP Nov 12 22:04:43.038322 systemd-networkd[1725]: vxlan.calico: Gained carrier Nov 12 22:04:44.101548 systemd-networkd[1725]: vxlan.calico: Gained IPv6LL Nov 12 22:04:47.496592 containerd[1811]: time="2024-11-12T22:04:47.496541586Z" level=info msg="StopPodSandbox for \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\"" Nov 12 22:04:47.497009 containerd[1811]: time="2024-11-12T22:04:47.496546399Z" level=info msg="StopPodSandbox for \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\"" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.535 [INFO][4877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.535 [INFO][4877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" iface="eth0" netns="/var/run/netns/cni-2e55ce47-4446-4f0f-d4fe-ec65f3bca0f1" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" iface="eth0" netns="/var/run/netns/cni-2e55ce47-4446-4f0f-d4fe-ec65f3bca0f1" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" iface="eth0" netns="/var/run/netns/cni-2e55ce47-4446-4f0f-d4fe-ec65f3bca0f1" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.548 [INFO][4903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.548 [INFO][4903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.548 [INFO][4903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.552 [WARNING][4903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.552 [INFO][4903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.554 [INFO][4903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:47.556972 containerd[1811]: 2024-11-12 22:04:47.556 [INFO][4877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:04:47.557392 containerd[1811]: time="2024-11-12T22:04:47.557066685Z" level=info msg="TearDown network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\" successfully" Nov 12 22:04:47.557392 containerd[1811]: time="2024-11-12T22:04:47.557091520Z" level=info msg="StopPodSandbox for \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\" returns successfully" Nov 12 22:04:47.557578 containerd[1811]: time="2024-11-12T22:04:47.557561572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b8h7n,Uid:1044f927-3795-4d3f-aa90-111cd417f193,Namespace:kube-system,Attempt:1,}" Nov 12 22:04:47.558763 systemd[1]: run-netns-cni\x2d2e55ce47\x2d4446\x2d4f0f\x2dd4fe\x2dec65f3bca0f1.mount: Deactivated successfully. Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4878] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" iface="eth0" netns="/var/run/netns/cni-6ea11a64-866f-46f4-e42b-1f3b2ffe5003" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" iface="eth0" netns="/var/run/netns/cni-6ea11a64-866f-46f4-e42b-1f3b2ffe5003" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" iface="eth0" netns="/var/run/netns/cni-6ea11a64-866f-46f4-e42b-1f3b2ffe5003" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4878] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.536 [INFO][4878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.548 [INFO][4904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.548 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.554 [INFO][4904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.558 [WARNING][4904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.558 [INFO][4904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.559 [INFO][4904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:47.560592 containerd[1811]: 2024-11-12 22:04:47.559 [INFO][4878] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:04:47.560943 containerd[1811]: time="2024-11-12T22:04:47.560687308Z" level=info msg="TearDown network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\" successfully" Nov 12 22:04:47.560943 containerd[1811]: time="2024-11-12T22:04:47.560698766Z" level=info msg="StopPodSandbox for \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\" returns successfully" Nov 12 22:04:47.561038 containerd[1811]: time="2024-11-12T22:04:47.561018938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zp6z,Uid:2ab49c9f-8e53-44c9-8b09-d25ffd106921,Namespace:calico-system,Attempt:1,}" Nov 12 22:04:47.562050 systemd[1]: run-netns-cni\x2d6ea11a64\x2d866f\x2d46f4\x2de42b\x2d1f3b2ffe5003.mount: Deactivated successfully. 
Nov 12 22:04:47.620609 systemd-networkd[1725]: cali2cce9a5cbe3: Link UP Nov 12 22:04:47.620754 systemd-networkd[1725]: cali2cce9a5cbe3: Gained carrier Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.584 [INFO][4934] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0 coredns-7db6d8ff4d- kube-system 1044f927-3795-4d3f-aa90-111cd417f193 806 0 2024-11-12 22:04:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-a9d0314af7 coredns-7db6d8ff4d-b8h7n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2cce9a5cbe3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.585 [INFO][4934] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.600 [INFO][4976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" HandleID="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.605 [INFO][4976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" HandleID="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000369a70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-a9d0314af7", "pod":"coredns-7db6d8ff4d-b8h7n", "timestamp":"2024-11-12 22:04:47.600049609 +0000 UTC"}, Hostname:"ci-4081.2.0-a-a9d0314af7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.605 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.605 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.605 [INFO][4976] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-a9d0314af7' Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.606 [INFO][4976] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.608 [INFO][4976] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.610 [INFO][4976] ipam/ipam.go 489: Trying affinity for 192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.611 [INFO][4976] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.612 [INFO][4976] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.612 [INFO][4976] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.613 [INFO][4976] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638 Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.616 [INFO][4976] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.618 [INFO][4976] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.65/26] block=192.168.20.64/26 handle="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.618 [INFO][4976] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.65/26] handle="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.618 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 22:04:47.625564 containerd[1811]: 2024-11-12 22:04:47.618 [INFO][4976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.65/26] IPv6=[] ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" HandleID="k8s-pod-network.c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.626122 containerd[1811]: 2024-11-12 22:04:47.619 [INFO][4934] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1044f927-3795-4d3f-aa90-111cd417f193", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"", Pod:"coredns-7db6d8ff4d-b8h7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cce9a5cbe3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:47.626122 containerd[1811]: 2024-11-12 22:04:47.619 [INFO][4934] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.65/32] ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.626122 containerd[1811]: 2024-11-12 22:04:47.619 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cce9a5cbe3 ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.626122 containerd[1811]: 2024-11-12 22:04:47.620 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" 
WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.626249 containerd[1811]: 2024-11-12 22:04:47.620 [INFO][4934] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1044f927-3795-4d3f-aa90-111cd417f193", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638", Pod:"coredns-7db6d8ff4d-b8h7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cce9a5cbe3", MAC:"d2:7b:a7:72:00:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:47.626249 containerd[1811]: 2024-11-12 22:04:47.624 [INFO][4934] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b8h7n" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:04:47.634603 systemd-networkd[1725]: calib3dfe231bdd: Link UP Nov 12 22:04:47.634731 systemd-networkd[1725]: calib3dfe231bdd: Gained carrier Nov 12 22:04:47.634802 containerd[1811]: time="2024-11-12T22:04:47.634746502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:47.635018 containerd[1811]: time="2024-11-12T22:04:47.634999528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:47.635018 containerd[1811]: time="2024-11-12T22:04:47.635009189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:47.635094 containerd[1811]: time="2024-11-12T22:04:47.635055792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.584 [INFO][4943] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0 csi-node-driver- calico-system 2ab49c9f-8e53-44c9-8b09-d25ffd106921 807 0 2024-11-12 22:04:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.0-a-a9d0314af7 csi-node-driver-5zp6z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib3dfe231bdd [] []}} ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.585 [INFO][4943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.600 [INFO][4977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" HandleID="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.605 [INFO][4977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" HandleID="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028ba10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-a9d0314af7", "pod":"csi-node-driver-5zp6z", "timestamp":"2024-11-12 22:04:47.600178912 +0000 UTC"}, Hostname:"ci-4081.2.0-a-a9d0314af7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.605 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.618 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.619 [INFO][4977] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-a9d0314af7' Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.619 [INFO][4977] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.621 [INFO][4977] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.624 [INFO][4977] ipam/ipam.go 489: Trying affinity for 192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.624 [INFO][4977] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.626 [INFO][4977] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.626 [INFO][4977] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.627 [INFO][4977] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317 Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.630 [INFO][4977] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.632 [INFO][4977] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.66/26] block=192.168.20.64/26 handle="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.632 [INFO][4977] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.66/26] handle="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.632 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 22:04:47.640644 containerd[1811]: 2024-11-12 22:04:47.632 [INFO][4977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.66/26] IPv6=[] ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" HandleID="k8s-pod-network.1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.641066 containerd[1811]: 2024-11-12 22:04:47.633 [INFO][4943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ab49c9f-8e53-44c9-8b09-d25ffd106921", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"", Pod:"csi-node-driver-5zp6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3dfe231bdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:47.641066 containerd[1811]: 2024-11-12 22:04:47.633 [INFO][4943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.66/32] ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.641066 containerd[1811]: 2024-11-12 22:04:47.633 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3dfe231bdd ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.641066 containerd[1811]: 2024-11-12 22:04:47.634 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.641066 containerd[1811]: 2024-11-12 22:04:47.635 [INFO][4943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ab49c9f-8e53-44c9-8b09-d25ffd106921", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317", Pod:"csi-node-driver-5zp6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3dfe231bdd", MAC:"52:c6:2c:10:f6:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:47.641066 containerd[1811]: 2024-11-12 22:04:47.639 [INFO][4943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317" Namespace="calico-system" Pod="csi-node-driver-5zp6z" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:04:47.649189 containerd[1811]: time="2024-11-12T22:04:47.649117044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:47.649419 containerd[1811]: time="2024-11-12T22:04:47.649161876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:47.649419 containerd[1811]: time="2024-11-12T22:04:47.649385198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:47.649477 containerd[1811]: time="2024-11-12T22:04:47.649431623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:47.651371 systemd[1]: Started cri-containerd-c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638.scope - libcontainer container c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638. Nov 12 22:04:47.654857 systemd[1]: Started cri-containerd-1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317.scope - libcontainer container 1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317. 
Nov 12 22:04:47.664871 containerd[1811]: time="2024-11-12T22:04:47.664847319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zp6z,Uid:2ab49c9f-8e53-44c9-8b09-d25ffd106921,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317\"" Nov 12 22:04:47.665607 containerd[1811]: time="2024-11-12T22:04:47.665594790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 22:04:47.674773 containerd[1811]: time="2024-11-12T22:04:47.674723556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b8h7n,Uid:1044f927-3795-4d3f-aa90-111cd417f193,Namespace:kube-system,Attempt:1,} returns sandbox id \"c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638\"" Nov 12 22:04:47.675856 containerd[1811]: time="2024-11-12T22:04:47.675844332Z" level=info msg="CreateContainer within sandbox \"c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:04:47.680300 containerd[1811]: time="2024-11-12T22:04:47.680249297Z" level=info msg="CreateContainer within sandbox \"c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a07ed210b13a1ec7b27be2f06188a865112f4ec1babe0bd3599c641823350fb9\"" Nov 12 22:04:47.680539 containerd[1811]: time="2024-11-12T22:04:47.680489093Z" level=info msg="StartContainer for \"a07ed210b13a1ec7b27be2f06188a865112f4ec1babe0bd3599c641823350fb9\"" Nov 12 22:04:47.697423 systemd[1]: Started cri-containerd-a07ed210b13a1ec7b27be2f06188a865112f4ec1babe0bd3599c641823350fb9.scope - libcontainer container a07ed210b13a1ec7b27be2f06188a865112f4ec1babe0bd3599c641823350fb9. Nov 12 22:04:47.708457 containerd[1811]: time="2024-11-12T22:04:47.708434983Z" level=info msg="StartContainer for \"a07ed210b13a1ec7b27be2f06188a865112f4ec1babe0bd3599c641823350fb9\" returns successfully" Nov 12 22:04:48.121162 kubelet[3195]: I1112 22:04:48.121040 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:04:48.497341 containerd[1811]: time="2024-11-12T22:04:48.497035288Z" level=info msg="StopPodSandbox for \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\"" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.542 [INFO][5246] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.542 [INFO][5246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" iface="eth0" netns="/var/run/netns/cni-51a20cd7-ae4c-2c56-82ce-f4a93d52247b" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.542 [INFO][5246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" iface="eth0" netns="/var/run/netns/cni-51a20cd7-ae4c-2c56-82ce-f4a93d52247b" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.542 [INFO][5246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" iface="eth0" netns="/var/run/netns/cni-51a20cd7-ae4c-2c56-82ce-f4a93d52247b" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.542 [INFO][5246] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.542 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.553 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.553 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.553 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.557 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.557 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.558 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:48.559372 containerd[1811]: 2024-11-12 22:04:48.558 [INFO][5246] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:04:48.559686 containerd[1811]: time="2024-11-12T22:04:48.559453816Z" level=info msg="TearDown network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\" successfully" Nov 12 22:04:48.559686 containerd[1811]: time="2024-11-12T22:04:48.559471394Z" level=info msg="StopPodSandbox for \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\" returns successfully" Nov 12 22:04:48.559894 containerd[1811]: time="2024-11-12T22:04:48.559880932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7bm,Uid:d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b,Namespace:kube-system,Attempt:1,}" Nov 12 22:04:48.561636 systemd[1]: run-netns-cni\x2d51a20cd7\x2dae4c\x2d2c56\x2d82ce\x2df4a93d52247b.mount: Deactivated successfully. 
Nov 12 22:04:48.615240 systemd-networkd[1725]: cali5fa94260055: Link UP Nov 12 22:04:48.615384 systemd-networkd[1725]: cali5fa94260055: Gained carrier Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.580 [INFO][5271] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0 coredns-7db6d8ff4d- kube-system d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b 822 0 2024-11-12 22:04:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-a9d0314af7 coredns-7db6d8ff4d-4b7bm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5fa94260055 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.580 [INFO][5271] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.594 [INFO][5295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" HandleID="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.600 [INFO][5295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" HandleID="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bc850), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-a9d0314af7", "pod":"coredns-7db6d8ff4d-4b7bm", "timestamp":"2024-11-12 22:04:48.594752455 +0000 UTC"}, Hostname:"ci-4081.2.0-a-a9d0314af7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.600 [INFO][5295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.600 [INFO][5295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.600 [INFO][5295] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-a9d0314af7' Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.601 [INFO][5295] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.604 [INFO][5295] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.606 [INFO][5295] ipam/ipam.go 489: Trying affinity for 192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.607 [INFO][5295] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.608 [INFO][5295] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.608 [INFO][5295] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.608 [INFO][5295] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.610 [INFO][5295] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.613 [INFO][5295] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.67/26] block=192.168.20.64/26 handle="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.613 [INFO][5295] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.67/26] handle="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.613 [INFO][5295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
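In the IPAM trace above, the plugin confirms host affinity for the 192.168.20.64/26 block and then claims 192.168.20.67 from it for coredns-7db6d8ff4d-4b7bm. A quick offline sanity check of that kind of assignment, using only the values copied from the log and Python's standard ipaddress module:

```python
import ipaddress

block = ipaddress.ip_network("192.168.20.64/26")   # host-affine block from the log
assigned = ipaddress.ip_address("192.168.20.67")   # address claimed in this trace

print(assigned in block)     # True: the claimed IP stays inside the affine block
print(block.num_addresses)   # 64 addresses available per /26 block
```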
Nov 12 22:04:48.620834 containerd[1811]: 2024-11-12 22:04:48.613 [INFO][5295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.67/26] IPv6=[] ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" HandleID="k8s-pod-network.a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.621290 containerd[1811]: 2024-11-12 22:04:48.614 [INFO][5271] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"", Pod:"coredns-7db6d8ff4d-4b7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fa94260055", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:48.621290 containerd[1811]: 2024-11-12 22:04:48.614 [INFO][5271] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.67/32] ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.621290 containerd[1811]: 2024-11-12 22:04:48.614 [INFO][5271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fa94260055 ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.621290 containerd[1811]: 2024-11-12 22:04:48.615 [INFO][5271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" 
WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.621397 containerd[1811]: 2024-11-12 22:04:48.615 [INFO][5271] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef", Pod:"coredns-7db6d8ff4d-4b7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fa94260055", MAC:"b2:d1:f8:76:ef:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:48.621397 containerd[1811]: 2024-11-12 22:04:48.619 [INFO][5271] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7bm" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:04:48.630485 containerd[1811]: time="2024-11-12T22:04:48.630232869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:48.630485 containerd[1811]: time="2024-11-12T22:04:48.630444983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:48.630485 containerd[1811]: time="2024-11-12T22:04:48.630453152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:48.630571 containerd[1811]: time="2024-11-12T22:04:48.630494274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:48.645604 kubelet[3195]: I1112 22:04:48.645567 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b8h7n" podStartSLOduration=28.645554792 podStartE2EDuration="28.645554792s" podCreationTimestamp="2024-11-12 22:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:48.645466493 +0000 UTC m=+42.222155981" watchObservedRunningTime="2024-11-12 22:04:48.645554792 +0000 UTC m=+42.222244282" Nov 12 22:04:48.648411 systemd[1]: Started cri-containerd-a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef.scope - libcontainer container a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef. Nov 12 22:04:48.671125 containerd[1811]: time="2024-11-12T22:04:48.671100418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7bm,Uid:d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b,Namespace:kube-system,Attempt:1,} returns sandbox id \"a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef\"" Nov 12 22:04:48.672422 containerd[1811]: time="2024-11-12T22:04:48.672406538Z" level=info msg="CreateContainer within sandbox \"a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:04:48.676518 containerd[1811]: time="2024-11-12T22:04:48.676504294Z" level=info msg="CreateContainer within sandbox \"a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"daa08fcb1824b098e3ee703c7ac1a502595683861f27630bb81b39876805fb95\"" Nov 12 22:04:48.676707 containerd[1811]: time="2024-11-12T22:04:48.676695966Z" level=info msg="StartContainer for \"daa08fcb1824b098e3ee703c7ac1a502595683861f27630bb81b39876805fb95\"" Nov 12 22:04:48.702542 systemd[1]: Started cri-containerd-daa08fcb1824b098e3ee703c7ac1a502595683861f27630bb81b39876805fb95.scope - libcontainer container daa08fcb1824b098e3ee703c7ac1a502595683861f27630bb81b39876805fb95. 
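The pod_startup_latency_tracker entry above reports podStartE2EDuration as the gap between podCreationTimestamp and the observed running time. That arithmetic can be reproduced from the two timestamps in the entry (a sketch; the timestamps are copied from the log with the monotonic "m=+..." suffix dropped, and sub-microsecond digits truncated):

```python
from datetime import datetime, timezone

created = datetime(2024, 11, 12, 22, 4, 20, tzinfo=timezone.utc)          # podCreationTimestamp
running = datetime(2024, 11, 12, 22, 4, 48, 645554, tzinfo=timezone.utc)  # watchObservedRunningTime

print((running - created).total_seconds())  # ~28.6456 s, matching podStartE2EDuration="28.645554792s"
```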
Nov 12 22:04:48.716499 containerd[1811]: time="2024-11-12T22:04:48.716453920Z" level=info msg="StartContainer for \"daa08fcb1824b098e3ee703c7ac1a502595683861f27630bb81b39876805fb95\" returns successfully" Nov 12 22:04:49.029489 systemd-networkd[1725]: cali2cce9a5cbe3: Gained IPv6LL Nov 12 22:04:49.457127 containerd[1811]: time="2024-11-12T22:04:49.457073597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:49.457344 containerd[1811]: time="2024-11-12T22:04:49.457250192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 22:04:49.457780 containerd[1811]: time="2024-11-12T22:04:49.457744182Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:49.458831 containerd[1811]: time="2024-11-12T22:04:49.458791285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:49.459203 containerd[1811]: time="2024-11-12T22:04:49.459156506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.793543122s" Nov 12 22:04:49.459203 containerd[1811]: time="2024-11-12T22:04:49.459170120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 22:04:49.460721 containerd[1811]: time="2024-11-12T22:04:49.460667698Z" level=info msg="CreateContainer within sandbox \"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 22:04:49.466382 containerd[1811]: time="2024-11-12T22:04:49.466340945Z" level=info msg="CreateContainer within sandbox \"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"af9b5ebdac0be771617b36e02d4bf842e34133d94398414387ef857f56adce48\"" Nov 12 22:04:49.466586 containerd[1811]: time="2024-11-12T22:04:49.466569065Z" level=info msg="StartContainer for \"af9b5ebdac0be771617b36e02d4bf842e34133d94398414387ef857f56adce48\"" Nov 12 22:04:49.490548 systemd[1]: Started cri-containerd-af9b5ebdac0be771617b36e02d4bf842e34133d94398414387ef857f56adce48.scope - libcontainer container af9b5ebdac0be771617b36e02d4bf842e34133d94398414387ef857f56adce48. 
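The csi:v3.29.0 pull above reports 7,902,635 bytes read, a 9,395,727-byte image size, and a 1.793543122 s pull time. A rough throughput estimate from those figures (illustrative arithmetic only; it assumes the "bytes read" counter approximates what was fetched from the registry during this pull):

```python
bytes_read = 7_902_635        # "stop pulling image ... bytes read" from the log
pull_seconds = 1.793543122    # duration reported by "Pulled image ... in ..."

mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")  # ~4.2 MiB/s for this registry fetch
```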
Nov 12 22:04:49.496543 containerd[1811]: time="2024-11-12T22:04:49.496513852Z" level=info msg="StopPodSandbox for \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\"" Nov 12 22:04:49.506405 containerd[1811]: time="2024-11-12T22:04:49.506346862Z" level=info msg="StartContainer for \"af9b5ebdac0be771617b36e02d4bf842e34133d94398414387ef857f56adce48\" returns successfully" Nov 12 22:04:49.507084 containerd[1811]: time="2024-11-12T22:04:49.507064526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.521 [INFO][5459] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.521 [INFO][5459] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" iface="eth0" netns="/var/run/netns/cni-49eeaecb-3f89-878a-02c7-61ccedfbc0fd" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.521 [INFO][5459] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" iface="eth0" netns="/var/run/netns/cni-49eeaecb-3f89-878a-02c7-61ccedfbc0fd" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.521 [INFO][5459] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" iface="eth0" netns="/var/run/netns/cni-49eeaecb-3f89-878a-02c7-61ccedfbc0fd" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.521 [INFO][5459] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.521 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.532 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.532 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.532 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.536 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.537 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.539 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:49.545770 containerd[1811]: 2024-11-12 22:04:49.542 [INFO][5459] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:04:49.547110 containerd[1811]: time="2024-11-12T22:04:49.546180520Z" level=info msg="TearDown network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\" successfully" Nov 12 22:04:49.547110 containerd[1811]: time="2024-11-12T22:04:49.546277226Z" level=info msg="StopPodSandbox for \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\" returns successfully" Nov 12 22:04:49.547716 containerd[1811]: time="2024-11-12T22:04:49.547602352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-97n44,Uid:920c6399-0c92-4fd2-94f7-2a8b4fbecff5,Namespace:calico-apiserver,Attempt:1,}" Nov 12 22:04:49.562761 systemd[1]: run-netns-cni\x2d49eeaecb\x2d3f89\x2d878a\x2d02c7\x2d61ccedfbc0fd.mount: Deactivated successfully. 
Nov 12 22:04:49.605600 systemd-networkd[1725]: calib3dfe231bdd: Gained IPv6LL Nov 12 22:04:49.652583 kubelet[3195]: I1112 22:04:49.652540 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4b7bm" podStartSLOduration=29.652523296 podStartE2EDuration="29.652523296s" podCreationTimestamp="2024-11-12 22:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:49.652483939 +0000 UTC m=+43.229173430" watchObservedRunningTime="2024-11-12 22:04:49.652523296 +0000 UTC m=+43.229212781" Nov 12 22:04:49.657632 systemd-networkd[1725]: cali01f9ee2b857: Link UP Nov 12 22:04:49.657837 systemd-networkd[1725]: cali01f9ee2b857: Gained carrier Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.569 [INFO][5499] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0 calico-apiserver-7d4b7b8c7- calico-apiserver 920c6399-0c92-4fd2-94f7-2a8b4fbecff5 844 0 2024-11-12 22:04:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4b7b8c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-a9d0314af7 calico-apiserver-7d4b7b8c7-97n44 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali01f9ee2b857 [] []}} ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.569 [INFO][5499] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.625 [INFO][5520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" HandleID="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.633 [INFO][5520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" HandleID="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000505d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-a9d0314af7", "pod":"calico-apiserver-7d4b7b8c7-97n44", "timestamp":"2024-11-12 22:04:49.625386721 +0000 UTC"}, Hostname:"ci-4081.2.0-a-a9d0314af7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 
22:04:49.633 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.633 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.633 [INFO][5520] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-a9d0314af7' Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.635 [INFO][5520] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.638 [INFO][5520] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.642 [INFO][5520] ipam/ipam.go 489: Trying affinity for 192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.644 [INFO][5520] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.646 [INFO][5520] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.646 [INFO][5520] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.647 [INFO][5520] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444 Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.650 [INFO][5520] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.654 [INFO][5520] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.68/26] block=192.168.20.64/26 handle="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.654 [INFO][5520] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.68/26] handle="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:49.665451 containerd[1811]: 2024-11-12 22:04:49.654 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 22:04:49.666160 containerd[1811]: 2024-11-12 22:04:49.654 [INFO][5520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.68/26] IPv6=[] ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" HandleID="k8s-pod-network.2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.666160 containerd[1811]: 2024-11-12 22:04:49.656 [INFO][5499] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"920c6399-0c92-4fd2-94f7-2a8b4fbecff5", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"", Pod:"calico-apiserver-7d4b7b8c7-97n44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01f9ee2b857", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:49.666160 containerd[1811]: 2024-11-12 22:04:49.656 [INFO][5499] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.68/32] ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.666160 containerd[1811]: 2024-11-12 22:04:49.656 [INFO][5499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01f9ee2b857 ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.666160 containerd[1811]: 2024-11-12 22:04:49.657 [INFO][5499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.666160 containerd[1811]: 2024-11-12 22:04:49.657 [INFO][5499] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"920c6399-0c92-4fd2-94f7-2a8b4fbecff5", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444", Pod:"calico-apiserver-7d4b7b8c7-97n44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01f9ee2b857", MAC:"aa:e7:8e:96:f6:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:49.666479 containerd[1811]: 2024-11-12 22:04:49.663 [INFO][5499] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-97n44" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:04:49.679690 containerd[1811]: time="2024-11-12T22:04:49.679619002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:49.679690 containerd[1811]: time="2024-11-12T22:04:49.679664986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:49.679690 containerd[1811]: time="2024-11-12T22:04:49.679678681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:49.679886 containerd[1811]: time="2024-11-12T22:04:49.679757529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:49.710388 systemd[1]: Started cri-containerd-2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444.scope - libcontainer container 2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444. 
Nov 12 22:04:49.751886 containerd[1811]: time="2024-11-12T22:04:49.751828638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-97n44,Uid:920c6399-0c92-4fd2-94f7-2a8b4fbecff5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444\"" Nov 12 22:04:50.181616 systemd-networkd[1725]: cali5fa94260055: Gained IPv6LL Nov 12 22:04:50.497151 containerd[1811]: time="2024-11-12T22:04:50.496980720Z" level=info msg="StopPodSandbox for \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\"" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.533 [INFO][5611] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.534 [INFO][5611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" iface="eth0" netns="/var/run/netns/cni-cc427401-12cb-1cb0-cb14-3acbd51d7404" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.534 [INFO][5611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" iface="eth0" netns="/var/run/netns/cni-cc427401-12cb-1cb0-cb14-3acbd51d7404" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.534 [INFO][5611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" iface="eth0" netns="/var/run/netns/cni-cc427401-12cb-1cb0-cb14-3acbd51d7404" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.534 [INFO][5611] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.534 [INFO][5611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.544 [INFO][5625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.544 [INFO][5625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.544 [INFO][5625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.548 [WARNING][5625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.548 [INFO][5625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.549 [INFO][5625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:50.550540 containerd[1811]: 2024-11-12 22:04:50.549 [INFO][5611] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:04:50.550941 containerd[1811]: time="2024-11-12T22:04:50.550589259Z" level=info msg="TearDown network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\" successfully" Nov 12 22:04:50.550941 containerd[1811]: time="2024-11-12T22:04:50.550609900Z" level=info msg="StopPodSandbox for \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\" returns successfully" Nov 12 22:04:50.551057 containerd[1811]: time="2024-11-12T22:04:50.551014533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-kll9h,Uid:a9130193-3858-4a34-9c6f-60605387578f,Namespace:calico-apiserver,Attempt:1,}" Nov 12 22:04:50.553229 systemd[1]: run-netns-cni\x2dcc427401\x2d12cb\x2d1cb0\x2dcb14\x2d3acbd51d7404.mount: Deactivated successfully. 
Nov 12 22:04:50.612813 systemd-networkd[1725]: cali3c80bfbfc20: Link UP Nov 12 22:04:50.612925 systemd-networkd[1725]: cali3c80bfbfc20: Gained carrier Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.572 [INFO][5639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0 calico-apiserver-7d4b7b8c7- calico-apiserver a9130193-3858-4a34-9c6f-60605387578f 854 0 2024-11-12 22:04:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4b7b8c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-a9d0314af7 calico-apiserver-7d4b7b8c7-kll9h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c80bfbfc20 [] []}} ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.572 [INFO][5639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.587 [INFO][5663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" HandleID="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.591 [INFO][5663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" HandleID="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-a9d0314af7", "pod":"calico-apiserver-7d4b7b8c7-kll9h", "timestamp":"2024-11-12 22:04:50.587046011 +0000 UTC"}, Hostname:"ci-4081.2.0-a-a9d0314af7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.591 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.591 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.591 [INFO][5663] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-a9d0314af7' Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.592 [INFO][5663] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.595 [INFO][5663] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.597 [INFO][5663] ipam/ipam.go 489: Trying affinity for 192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.598 [INFO][5663] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.599 [INFO][5663] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.599 [INFO][5663] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.600 [INFO][5663] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.607 [INFO][5663] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.610 [INFO][5663] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.69/26] block=192.168.20.64/26 handle="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.611 [INFO][5663] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.69/26] handle="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:50.618120 containerd[1811]: 2024-11-12 22:04:50.611 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 22:04:50.618529 containerd[1811]: 2024-11-12 22:04:50.611 [INFO][5663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.69/26] IPv6=[] ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" HandleID="k8s-pod-network.e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.618529 containerd[1811]: 2024-11-12 22:04:50.612 [INFO][5639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9130193-3858-4a34-9c6f-60605387578f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"", Pod:"calico-apiserver-7d4b7b8c7-kll9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c80bfbfc20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:50.618529 containerd[1811]: 2024-11-12 22:04:50.612 [INFO][5639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.69/32] ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.618529 containerd[1811]: 2024-11-12 22:04:50.612 [INFO][5639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c80bfbfc20 ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.618529 containerd[1811]: 2024-11-12 22:04:50.612 [INFO][5639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.618529 containerd[1811]: 2024-11-12 22:04:50.613 [INFO][5639] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9130193-3858-4a34-9c6f-60605387578f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d", Pod:"calico-apiserver-7d4b7b8c7-kll9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c80bfbfc20", MAC:"d2:0b:25:5b:29:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:50.618669 containerd[1811]: 2024-11-12 22:04:50.617 [INFO][5639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b7b8c7-kll9h" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:04:50.627445 containerd[1811]: time="2024-11-12T22:04:50.627155500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:50.627445 containerd[1811]: time="2024-11-12T22:04:50.627381694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:50.627445 containerd[1811]: time="2024-11-12T22:04:50.627390653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:50.627566 containerd[1811]: time="2024-11-12T22:04:50.627457360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:50.654399 systemd[1]: Started cri-containerd-e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d.scope - libcontainer container e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d. 
Nov 12 22:04:50.676648 containerd[1811]: time="2024-11-12T22:04:50.676627523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b7b8c7-kll9h,Uid:a9130193-3858-4a34-9c6f-60605387578f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d\"" Nov 12 22:04:51.077537 containerd[1811]: time="2024-11-12T22:04:51.077482837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:51.077742 containerd[1811]: time="2024-11-12T22:04:51.077691213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 22:04:51.078005 containerd[1811]: time="2024-11-12T22:04:51.077970753Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:51.079078 containerd[1811]: time="2024-11-12T22:04:51.079037004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:51.079799 containerd[1811]: time="2024-11-12T22:04:51.079763302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.572669366s" Nov 12 22:04:51.079849 containerd[1811]: time="2024-11-12T22:04:51.079799853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 22:04:51.080783 containerd[1811]: time="2024-11-12T22:04:51.080767661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 22:04:51.081330 containerd[1811]: time="2024-11-12T22:04:51.081315825Z" level=info msg="CreateContainer within sandbox \"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 22:04:51.086006 containerd[1811]: time="2024-11-12T22:04:51.085963659Z" level=info msg="CreateContainer within sandbox \"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"85913b1f2f453c2feca67933150e84d9d349925ec827b337dc0b7d6c30a4c648\"" Nov 12 22:04:51.086197 containerd[1811]: time="2024-11-12T22:04:51.086183900Z" level=info msg="StartContainer for \"85913b1f2f453c2feca67933150e84d9d349925ec827b337dc0b7d6c30a4c648\"" Nov 12 22:04:51.111714 systemd[1]: Started cri-containerd-85913b1f2f453c2feca67933150e84d9d349925ec827b337dc0b7d6c30a4c648.scope - libcontainer container 85913b1f2f453c2feca67933150e84d9d349925ec827b337dc0b7d6c30a4c648. 
Nov 12 22:04:51.158202 containerd[1811]: time="2024-11-12T22:04:51.158135547Z" level=info msg="StartContainer for \"85913b1f2f453c2feca67933150e84d9d349925ec827b337dc0b7d6c30a4c648\" returns successfully" Nov 12 22:04:51.269506 systemd-networkd[1725]: cali01f9ee2b857: Gained IPv6LL Nov 12 22:04:51.495868 containerd[1811]: time="2024-11-12T22:04:51.495767979Z" level=info msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\"" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.520 [INFO][5797] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.520 [INFO][5797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" iface="eth0" netns="/var/run/netns/cni-14a9b715-1266-dae6-22da-be7db61d4f14" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.521 [INFO][5797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" iface="eth0" netns="/var/run/netns/cni-14a9b715-1266-dae6-22da-be7db61d4f14" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.521 [INFO][5797] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" iface="eth0" netns="/var/run/netns/cni-14a9b715-1266-dae6-22da-be7db61d4f14" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.521 [INFO][5797] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.521 [INFO][5797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.535 [INFO][5812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.535 [INFO][5812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.535 [INFO][5812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.540 [WARNING][5812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.540 [INFO][5812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.541 [INFO][5812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:51.543173 containerd[1811]: 2024-11-12 22:04:51.542 [INFO][5797] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:51.543561 containerd[1811]: time="2024-11-12T22:04:51.543296292Z" level=info msg="TearDown network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" successfully" Nov 12 22:04:51.543561 containerd[1811]: time="2024-11-12T22:04:51.543321106Z" level=info msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" returns successfully" Nov 12 22:04:51.543809 containerd[1811]: time="2024-11-12T22:04:51.543749692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7546446886-8b475,Uid:f7d2b67c-78b9-4645-b1a5-5cfb719e011e,Namespace:calico-system,Attempt:1,}" Nov 12 22:04:51.544308 kubelet[3195]: I1112 22:04:51.544264 3195 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 22:04:51.544308 kubelet[3195]: I1112 22:04:51.544281 3195 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 22:04:51.562638 systemd[1]: run-netns-cni\x2d14a9b715\x2d1266\x2ddae6\x2d22da\x2dbe7db61d4f14.mount: Deactivated successfully. 
Nov 12 22:04:51.604632 systemd-networkd[1725]: cali36266a55883: Link UP Nov 12 22:04:51.604771 systemd-networkd[1725]: cali36266a55883: Gained carrier Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.570 [INFO][5826] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0 calico-kube-controllers-7546446886- calico-system f7d2b67c-78b9-4645-b1a5-5cfb719e011e 870 0 2024-11-12 22:04:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7546446886 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.0-a-a9d0314af7 calico-kube-controllers-7546446886-8b475 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali36266a55883 [] []}} ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.570 [INFO][5826] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.584 [INFO][5848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.589 [INFO][5848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002782b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-a9d0314af7", "pod":"calico-kube-controllers-7546446886-8b475", "timestamp":"2024-11-12 22:04:51.584333034 +0000 UTC"}, Hostname:"ci-4081.2.0-a-a9d0314af7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.589 [INFO][5848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.589 [INFO][5848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.589 [INFO][5848] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-a9d0314af7' Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.590 [INFO][5848] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.593 [INFO][5848] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.595 [INFO][5848] ipam/ipam.go 489: Trying affinity for 192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.596 [INFO][5848] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.597 [INFO][5848] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.597 [INFO][5848] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.598 [INFO][5848] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.600 [INFO][5848] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.602 [INFO][5848] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.70/26] block=192.168.20.64/26 handle="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.603 [INFO][5848] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.70/26] handle="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:51.609912 containerd[1811]: 2024-11-12 22:04:51.603 [INFO][5848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 22:04:51.610413 containerd[1811]: 2024-11-12 22:04:51.603 [INFO][5848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.70/26] IPv6=[] ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.610413 containerd[1811]: 2024-11-12 22:04:51.603 [INFO][5826] cni-plugin/k8s.go 386: Populated endpoint ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0", GenerateName:"calico-kube-controllers-7546446886-", Namespace:"calico-system", SelfLink:"", UID:"f7d2b67c-78b9-4645-b1a5-5cfb719e011e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7546446886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"", Pod:"calico-kube-controllers-7546446886-8b475", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36266a55883", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:51.610413 containerd[1811]: 2024-11-12 22:04:51.603 [INFO][5826] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.70/32] ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.610413 containerd[1811]: 2024-11-12 22:04:51.603 [INFO][5826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36266a55883 ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.610413 containerd[1811]: 2024-11-12 22:04:51.604 [INFO][5826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.610519 
containerd[1811]: 2024-11-12 22:04:51.604 [INFO][5826] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0", GenerateName:"calico-kube-controllers-7546446886-", Namespace:"calico-system", SelfLink:"", UID:"f7d2b67c-78b9-4645-b1a5-5cfb719e011e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7546446886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c", Pod:"calico-kube-controllers-7546446886-8b475", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36266a55883", MAC:"de:a4:5a:d2:98:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:51.610519 containerd[1811]: 2024-11-12 22:04:51.608 [INFO][5826] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Namespace="calico-system" Pod="calico-kube-controllers-7546446886-8b475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:51.623501 containerd[1811]: time="2024-11-12T22:04:51.623432634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:51.623501 containerd[1811]: time="2024-11-12T22:04:51.623462439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:51.623501 containerd[1811]: time="2024-11-12T22:04:51.623469625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:51.623600 containerd[1811]: time="2024-11-12T22:04:51.623515244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:51.659401 systemd[1]: Started cri-containerd-679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c.scope - libcontainer container 679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c. 
Nov 12 22:04:51.681687 containerd[1811]: time="2024-11-12T22:04:51.681664496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7546446886-8b475,Uid:f7d2b67c-78b9-4645-b1a5-5cfb719e011e,Namespace:calico-system,Attempt:1,} returns sandbox id \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\"" Nov 12 22:04:52.102491 systemd-networkd[1725]: cali3c80bfbfc20: Gained IPv6LL Nov 12 22:04:52.383746 kubelet[3195]: I1112 22:04:52.383644 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5zp6z" podStartSLOduration=22.968388824 podStartE2EDuration="26.38362969s" podCreationTimestamp="2024-11-12 22:04:26 +0000 UTC" firstStartedPulling="2024-11-12 22:04:47.66544917 +0000 UTC m=+41.242138650" lastFinishedPulling="2024-11-12 22:04:51.080690036 +0000 UTC m=+44.657379516" observedRunningTime="2024-11-12 22:04:51.655559453 +0000 UTC m=+45.232248933" watchObservedRunningTime="2024-11-12 22:04:52.38362969 +0000 UTC m=+45.960319170" Nov 12 22:04:52.384289 containerd[1811]: time="2024-11-12T22:04:52.384268901Z" level=info msg="StopContainer for \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\" with timeout 300 (s)" Nov 12 22:04:52.384509 containerd[1811]: time="2024-11-12T22:04:52.384492193Z" level=info msg="Stop container \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\" with signal terminated" Nov 12 22:04:52.445040 containerd[1811]: time="2024-11-12T22:04:52.445018953Z" level=info msg="StopContainer for \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\" with timeout 5 (s)" Nov 12 22:04:52.445149 containerd[1811]: time="2024-11-12T22:04:52.445134501Z" level=info msg="Stop container \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\" with signal terminated" Nov 12 22:04:52.451169 systemd[1]: cri-containerd-4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4.scope: Deactivated successfully. Nov 12 22:04:52.451306 systemd[1]: cri-containerd-4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4.scope: Consumed 1.293s CPU time. Nov 12 22:04:52.460793 containerd[1811]: time="2024-11-12T22:04:52.460751791Z" level=info msg="shim disconnected" id=4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4 namespace=k8s.io Nov 12 22:04:52.460870 containerd[1811]: time="2024-11-12T22:04:52.460791993Z" level=warning msg="cleaning up after shim disconnected" id=4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4 namespace=k8s.io Nov 12 22:04:52.460870 containerd[1811]: time="2024-11-12T22:04:52.460801911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:52.560347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4-rootfs.mount: Deactivated successfully. 
Nov 12 22:04:53.189413 systemd-networkd[1725]: cali36266a55883: Gained IPv6LL Nov 12 22:04:53.549071 containerd[1811]: time="2024-11-12T22:04:53.549017021Z" level=info msg="StopContainer for \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\" returns successfully" Nov 12 22:04:53.549347 containerd[1811]: time="2024-11-12T22:04:53.549335343Z" level=info msg="StopPodSandbox for \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\"" Nov 12 22:04:53.549376 containerd[1811]: time="2024-11-12T22:04:53.549356325Z" level=info msg="Container to stop \"b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:04:53.549376 containerd[1811]: time="2024-11-12T22:04:53.549363808Z" level=info msg="Container to stop \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:04:53.549376 containerd[1811]: time="2024-11-12T22:04:53.549368909Z" level=info msg="Container to stop \"84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:04:53.551147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc-shm.mount: Deactivated successfully. Nov 12 22:04:53.552383 systemd[1]: cri-containerd-edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc.scope: Deactivated successfully. Nov 12 22:04:53.559873 containerd[1811]: time="2024-11-12T22:04:53.559830433Z" level=info msg="shim disconnected" id=edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc namespace=k8s.io Nov 12 22:04:53.559873 containerd[1811]: time="2024-11-12T22:04:53.559870308Z" level=warning msg="cleaning up after shim disconnected" id=edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc namespace=k8s.io Nov 12 22:04:53.559873 containerd[1811]: time="2024-11-12T22:04:53.559877353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:53.561168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc-rootfs.mount: Deactivated successfully. 
Nov 12 22:04:53.737017 containerd[1811]: time="2024-11-12T22:04:53.736964206Z" level=info msg="TearDown network for sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" successfully" Nov 12 22:04:53.737017 containerd[1811]: time="2024-11-12T22:04:53.736981812Z" level=info msg="StopPodSandbox for \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" returns successfully" Nov 12 22:04:53.741247 containerd[1811]: time="2024-11-12T22:04:53.741226506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:53.741429 containerd[1811]: time="2024-11-12T22:04:53.741411149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 22:04:53.741729 containerd[1811]: time="2024-11-12T22:04:53.741689302Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:53.743857 containerd[1811]: time="2024-11-12T22:04:53.743841977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:53.744289 containerd[1811]: time="2024-11-12T22:04:53.744243920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.663451953s" Nov 12 22:04:53.744289 containerd[1811]: time="2024-11-12T22:04:53.744261586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 22:04:53.744930 containerd[1811]: time="2024-11-12T22:04:53.744890416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 22:04:53.745511 containerd[1811]: time="2024-11-12T22:04:53.745470032Z" level=info msg="CreateContainer within sandbox \"2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 22:04:53.749537 containerd[1811]: time="2024-11-12T22:04:53.749510324Z" level=info msg="CreateContainer within sandbox \"2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"337163ce834663dba425ead59ab9b9e5b67da7cc6812457ab2ca97f5e30f3ce9\"" Nov 12 22:04:53.749837 containerd[1811]: time="2024-11-12T22:04:53.749823656Z" level=info msg="StartContainer for \"337163ce834663dba425ead59ab9b9e5b67da7cc6812457ab2ca97f5e30f3ce9\"" Nov 12 22:04:53.754930 kubelet[3195]: I1112 22:04:53.754902 3195 topology_manager.go:215] "Topology Admit Handler" podUID="97744019-df89-4404-bbe1-3437ef2936f8" podNamespace="calico-system" podName="calico-node-ztfzr" Nov 12 22:04:53.755181 kubelet[3195]: E1112 22:04:53.754952 3195 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9956537c-d3e7-4a14-bec4-dcbb471e59d8" containerName="install-cni" Nov 12 22:04:53.755181 kubelet[3195]: E1112 22:04:53.754963 3195 cpu_manager.go:395] "RemoveStaleState: removing 
container" podUID="9956537c-d3e7-4a14-bec4-dcbb471e59d8" containerName="flexvol-driver" Nov 12 22:04:53.755181 kubelet[3195]: E1112 22:04:53.754969 3195 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9956537c-d3e7-4a14-bec4-dcbb471e59d8" containerName="calico-node" Nov 12 22:04:53.755181 kubelet[3195]: I1112 22:04:53.754992 3195 memory_manager.go:354] "RemoveStaleState removing state" podUID="9956537c-d3e7-4a14-bec4-dcbb471e59d8" containerName="calico-node" Nov 12 22:04:53.786593 systemd[1]: Started cri-containerd-337163ce834663dba425ead59ab9b9e5b67da7cc6812457ab2ca97f5e30f3ce9.scope - libcontainer container 337163ce834663dba425ead59ab9b9e5b67da7cc6812457ab2ca97f5e30f3ce9. Nov 12 22:04:53.787133 systemd[1]: Created slice kubepods-besteffort-pod97744019_df89_4404_bbe1_3437ef2936f8.slice - libcontainer container kubepods-besteffort-pod97744019_df89_4404_bbe1_3437ef2936f8.slice. Nov 12 22:04:53.809967 containerd[1811]: time="2024-11-12T22:04:53.809946035Z" level=info msg="StartContainer for \"337163ce834663dba425ead59ab9b9e5b67da7cc6812457ab2ca97f5e30f3ce9\" returns successfully" Nov 12 22:04:53.926381 kubelet[3195]: I1112 22:04:53.926330 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-bin-dir\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926381 kubelet[3195]: I1112 22:04:53.926340 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926381 kubelet[3195]: I1112 22:04:53.926359 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-xtables-lock\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926381 kubelet[3195]: I1112 22:04:53.926370 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926381 kubelet[3195]: I1112 22:04:53.926371 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-flexvol-driver-host\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926544 kubelet[3195]: I1112 22:04:53.926387 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926544 kubelet[3195]: I1112 22:04:53.926392 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rkkw\" (UniqueName: \"kubernetes.io/projected/9956537c-d3e7-4a14-bec4-dcbb471e59d8-kube-api-access-5rkkw\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926544 kubelet[3195]: I1112 22:04:53.926403 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-net-dir\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926544 kubelet[3195]: I1112 22:04:53.926412 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-policysync\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926544 kubelet[3195]: I1112 22:04:53.926420 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-lib-calico\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926544 kubelet[3195]: I1112 22:04:53.926429 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-run-calico\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926669 kubelet[3195]: I1112 22:04:53.926446 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-log-dir\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926669 kubelet[3195]: I1112 22:04:53.926455 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-lib-modules\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926669 kubelet[3195]: I1112 22:04:53.926433 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926669 kubelet[3195]: I1112 22:04:53.926467 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9956537c-d3e7-4a14-bec4-dcbb471e59d8-node-certs\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926669 kubelet[3195]: I1112 22:04:53.926437 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-policysync" (OuterVolumeSpecName: "policysync") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926766 kubelet[3195]: I1112 22:04:53.926447 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926766 kubelet[3195]: I1112 22:04:53.926454 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926766 kubelet[3195]: I1112 22:04:53.926464 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926766 kubelet[3195]: I1112 22:04:53.926481 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:04:53.926766 kubelet[3195]: I1112 22:04:53.926493 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9956537c-d3e7-4a14-bec4-dcbb471e59d8-tigera-ca-bundle\") pod \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\" (UID: \"9956537c-d3e7-4a14-bec4-dcbb471e59d8\") " Nov 12 22:04:53.926864 kubelet[3195]: I1112 22:04:53.926537 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/97744019-df89-4404-bbe1-3437ef2936f8-node-certs\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926864 kubelet[3195]: I1112 22:04:53.926556 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-cni-net-dir\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926864 kubelet[3195]: I1112 22:04:53.926584 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-flexvol-driver-host\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926864 kubelet[3195]: I1112 22:04:53.926597 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-var-lib-calico\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926864 kubelet[3195]: I1112 22:04:53.926607 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97744019-df89-4404-bbe1-3437ef2936f8-tigera-ca-bundle\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926971 kubelet[3195]: I1112 22:04:53.926617 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-cni-log-dir\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926971 kubelet[3195]: I1112 22:04:53.926627 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-lib-modules\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926971 kubelet[3195]: I1112 22:04:53.926636 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8rz\" (UniqueName: \"kubernetes.io/projected/97744019-df89-4404-bbe1-3437ef2936f8-kube-api-access-jv8rz\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926971 kubelet[3195]: I1112 
22:04:53.926645 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-var-run-calico\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.926971 kubelet[3195]: I1112 22:04:53.926665 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-policysync\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.927056 kubelet[3195]: I1112 22:04:53.926680 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-xtables-lock\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.927056 kubelet[3195]: I1112 22:04:53.926708 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/97744019-df89-4404-bbe1-3437ef2936f8-cni-bin-dir\") pod \"calico-node-ztfzr\" (UID: \"97744019-df89-4404-bbe1-3437ef2936f8\") " pod="calico-system/calico-node-ztfzr" Nov 12 22:04:53.927056 kubelet[3195]: I1112 22:04:53.926739 3195 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-flexvol-driver-host\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927056 kubelet[3195]: I1112 22:04:53.926747 3195 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-xtables-lock\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927056 kubelet[3195]: I1112 22:04:53.926752 3195 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-net-dir\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927056 kubelet[3195]: I1112 22:04:53.926757 3195 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-run-calico\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927056 kubelet[3195]: I1112 22:04:53.926762 3195 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-lib-modules\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927170 kubelet[3195]: I1112 22:04:53.926768 3195 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-bin-dir\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927170 kubelet[3195]: I1112 22:04:53.926777 3195 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-policysync\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927170 kubelet[3195]: I1112 22:04:53.926783 3195 
reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-var-lib-calico\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927170 kubelet[3195]: I1112 22:04:53.926787 3195 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9956537c-d3e7-4a14-bec4-dcbb471e59d8-cni-log-dir\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:53.927823 kubelet[3195]: I1112 22:04:53.927780 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9956537c-d3e7-4a14-bec4-dcbb471e59d8-kube-api-access-5rkkw" (OuterVolumeSpecName: "kube-api-access-5rkkw") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "kube-api-access-5rkkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:04:53.927863 kubelet[3195]: I1112 22:04:53.927833 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9956537c-d3e7-4a14-bec4-dcbb471e59d8-node-certs" (OuterVolumeSpecName: "node-certs") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:04:53.928344 kubelet[3195]: I1112 22:04:53.928303 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9956537c-d3e7-4a14-bec4-dcbb471e59d8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9956537c-d3e7-4a14-bec4-dcbb471e59d8" (UID: "9956537c-d3e7-4a14-bec4-dcbb471e59d8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:04:54.027909 kubelet[3195]: I1112 22:04:54.027853 3195 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9956537c-d3e7-4a14-bec4-dcbb471e59d8-node-certs\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:54.027909 kubelet[3195]: I1112 22:04:54.027872 3195 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9956537c-d3e7-4a14-bec4-dcbb471e59d8-tigera-ca-bundle\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:54.027909 kubelet[3195]: I1112 22:04:54.027882 3195 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5rkkw\" (UniqueName: \"kubernetes.io/projected/9956537c-d3e7-4a14-bec4-dcbb471e59d8-kube-api-access-5rkkw\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:54.038021 systemd[1]: cri-containerd-24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb.scope: Deactivated successfully. 
Nov 12 22:04:54.047788 containerd[1811]: time="2024-11-12T22:04:54.047754840Z" level=info msg="shim disconnected" id=24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb namespace=k8s.io Nov 12 22:04:54.047788 containerd[1811]: time="2024-11-12T22:04:54.047786083Z" level=warning msg="cleaning up after shim disconnected" id=24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb namespace=k8s.io Nov 12 22:04:54.047788 containerd[1811]: time="2024-11-12T22:04:54.047791902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:54.088714 containerd[1811]: time="2024-11-12T22:04:54.088656583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ztfzr,Uid:97744019-df89-4404-bbe1-3437ef2936f8,Namespace:calico-system,Attempt:0,}" Nov 12 22:04:54.130660 containerd[1811]: time="2024-11-12T22:04:54.130640027Z" level=info msg="StopContainer for \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\" returns successfully" Nov 12 22:04:54.130937 containerd[1811]: time="2024-11-12T22:04:54.130925931Z" level=info msg="StopPodSandbox for \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\"" Nov 12 22:04:54.130969 containerd[1811]: time="2024-11-12T22:04:54.130943826Z" level=info msg="Container to stop \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:04:54.133548 containerd[1811]: time="2024-11-12T22:04:54.133525298Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:54.133634 containerd[1811]: time="2024-11-12T22:04:54.133613608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 22:04:54.134497 systemd[1]: cri-containerd-4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59.scope: Deactivated successfully. Nov 12 22:04:54.135169 containerd[1811]: time="2024-11-12T22:04:54.135147685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 390.240648ms" Nov 12 22:04:54.135219 containerd[1811]: time="2024-11-12T22:04:54.135175901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 22:04:54.135728 containerd[1811]: time="2024-11-12T22:04:54.135713309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 22:04:54.136497 containerd[1811]: time="2024-11-12T22:04:54.136479930Z" level=info msg="CreateContainer within sandbox \"e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 22:04:54.139193 containerd[1811]: time="2024-11-12T22:04:54.139150483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:54.139193 containerd[1811]: time="2024-11-12T22:04:54.139184240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:54.139312 containerd[1811]: time="2024-11-12T22:04:54.139195686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:54.139312 containerd[1811]: time="2024-11-12T22:04:54.139258334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:54.141192 containerd[1811]: time="2024-11-12T22:04:54.141167837Z" level=info msg="CreateContainer within sandbox \"e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c4f36d8e955818a14242a7266b695075bb3dc366a6921e1ded1f8e0458b20f9\"" Nov 12 22:04:54.141508 containerd[1811]: time="2024-11-12T22:04:54.141491079Z" level=info msg="StartContainer for \"2c4f36d8e955818a14242a7266b695075bb3dc366a6921e1ded1f8e0458b20f9\"" Nov 12 22:04:54.143680 containerd[1811]: time="2024-11-12T22:04:54.143616012Z" level=info msg="shim disconnected" id=4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59 namespace=k8s.io Nov 12 22:04:54.143680 containerd[1811]: time="2024-11-12T22:04:54.143661090Z" level=warning msg="cleaning up after shim disconnected" id=4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59 namespace=k8s.io Nov 12 22:04:54.143680 containerd[1811]: time="2024-11-12T22:04:54.143669341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:54.152584 containerd[1811]: time="2024-11-12T22:04:54.152531787Z" level=info msg="TearDown network for sandbox \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" successfully" Nov 12 22:04:54.152584 containerd[1811]: time="2024-11-12T22:04:54.152550194Z" level=info msg="StopPodSandbox for \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" returns successfully" Nov 12 22:04:54.155373 systemd[1]: Started cri-containerd-8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539.scope - libcontainer container 8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539. Nov 12 22:04:54.157065 systemd[1]: Started cri-containerd-2c4f36d8e955818a14242a7266b695075bb3dc366a6921e1ded1f8e0458b20f9.scope - libcontainer container 2c4f36d8e955818a14242a7266b695075bb3dc366a6921e1ded1f8e0458b20f9. 
Nov 12 22:04:54.166648 containerd[1811]: time="2024-11-12T22:04:54.166602439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ztfzr,Uid:97744019-df89-4404-bbe1-3437ef2936f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539\"" Nov 12 22:04:54.167897 containerd[1811]: time="2024-11-12T22:04:54.167881642Z" level=info msg="CreateContainer within sandbox \"8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 22:04:54.173787 containerd[1811]: time="2024-11-12T22:04:54.173733214Z" level=info msg="CreateContainer within sandbox \"8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9\"" Nov 12 22:04:54.174062 containerd[1811]: time="2024-11-12T22:04:54.174013011Z" level=info msg="StartContainer for \"50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9\"" Nov 12 22:04:54.183033 containerd[1811]: time="2024-11-12T22:04:54.183009276Z" level=info msg="StartContainer for \"2c4f36d8e955818a14242a7266b695075bb3dc366a6921e1ded1f8e0458b20f9\" returns successfully" Nov 12 22:04:54.198414 systemd[1]: Started cri-containerd-50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9.scope - libcontainer container 50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9. Nov 12 22:04:54.212024 containerd[1811]: time="2024-11-12T22:04:54.211998565Z" level=info msg="StartContainer for \"50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9\" returns successfully" Nov 12 22:04:54.218328 systemd[1]: cri-containerd-50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9.scope: Deactivated successfully. Nov 12 22:04:54.230778 kubelet[3195]: I1112 22:04:54.230760 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-tigera-ca-bundle\") pod \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\" (UID: \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\") " Nov 12 22:04:54.230870 kubelet[3195]: I1112 22:04:54.230786 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tww7\" (UniqueName: \"kubernetes.io/projected/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-kube-api-access-8tww7\") pod \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\" (UID: \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\") " Nov 12 22:04:54.230870 kubelet[3195]: I1112 22:04:54.230804 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-typha-certs\") pod \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\" (UID: \"69a6a41a-9ffb-4b15-b959-5e02864b1cb1\") " Nov 12 22:04:54.232672 kubelet[3195]: I1112 22:04:54.232656 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "69a6a41a-9ffb-4b15-b959-5e02864b1cb1" (UID: "69a6a41a-9ffb-4b15-b959-5e02864b1cb1"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:04:54.244667 kubelet[3195]: I1112 22:04:54.244643 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "69a6a41a-9ffb-4b15-b959-5e02864b1cb1" (UID: "69a6a41a-9ffb-4b15-b959-5e02864b1cb1"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:04:54.268680 kubelet[3195]: I1112 22:04:54.268658 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-kube-api-access-8tww7" (OuterVolumeSpecName: "kube-api-access-8tww7") pod "69a6a41a-9ffb-4b15-b959-5e02864b1cb1" (UID: "69a6a41a-9ffb-4b15-b959-5e02864b1cb1"). InnerVolumeSpecName "kube-api-access-8tww7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:04:54.268758 containerd[1811]: time="2024-11-12T22:04:54.268710043Z" level=info msg="shim disconnected" id=50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9 namespace=k8s.io Nov 12 22:04:54.268758 containerd[1811]: time="2024-11-12T22:04:54.268744131Z" level=warning msg="cleaning up after shim disconnected" id=50c89da3dafb72fcaf191921020f9f8984915aa58857c21807ba67c45aa3d4b9 namespace=k8s.io Nov 12 22:04:54.268758 containerd[1811]: time="2024-11-12T22:04:54.268750001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:54.330961 kubelet[3195]: I1112 22:04:54.330914 3195 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-tigera-ca-bundle\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:54.330961 kubelet[3195]: I1112 22:04:54.330931 3195 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8tww7\" (UniqueName: \"kubernetes.io/projected/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-kube-api-access-8tww7\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:54.330961 kubelet[3195]: I1112 22:04:54.330937 3195 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/69a6a41a-9ffb-4b15-b959-5e02864b1cb1-typha-certs\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:54.500019 systemd[1]: Removed slice kubepods-besteffort-pod9956537c_d3e7_4a14_bec4_dcbb471e59d8.slice - libcontainer container kubepods-besteffort-pod9956537c_d3e7_4a14_bec4_dcbb471e59d8.slice. Nov 12 22:04:54.500072 systemd[1]: kubepods-besteffort-pod9956537c_d3e7_4a14_bec4_dcbb471e59d8.slice: Consumed 1.637s CPU time. Nov 12 22:04:54.500596 systemd[1]: Removed slice kubepods-besteffort-pod69a6a41a_9ffb_4b15_b959_5e02864b1cb1.slice - libcontainer container kubepods-besteffort-pod69a6a41a_9ffb_4b15_b959_5e02864b1cb1.slice. Nov 12 22:04:54.565398 systemd[1]: var-lib-kubelet-pods-9956537c\x2dd3e7\x2d4a14\x2dbec4\x2ddcbb471e59d8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Nov 12 22:04:54.565480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb-rootfs.mount: Deactivated successfully. Nov 12 22:04:54.565537 systemd[1]: var-lib-kubelet-pods-69a6a41a\x2d9ffb\x2d4b15\x2db959\x2d5e02864b1cb1-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
Nov 12 22:04:54.565600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59-rootfs.mount: Deactivated successfully. Nov 12 22:04:54.565650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59-shm.mount: Deactivated successfully. Nov 12 22:04:54.565696 systemd[1]: var-lib-kubelet-pods-9956537c\x2dd3e7\x2d4a14\x2dbec4\x2ddcbb471e59d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5rkkw.mount: Deactivated successfully. Nov 12 22:04:54.565744 systemd[1]: var-lib-kubelet-pods-9956537c\x2dd3e7\x2d4a14\x2dbec4\x2ddcbb471e59d8-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Nov 12 22:04:54.565789 systemd[1]: var-lib-kubelet-pods-69a6a41a\x2d9ffb\x2d4b15\x2db959\x2d5e02864b1cb1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tww7.mount: Deactivated successfully. Nov 12 22:04:54.565839 systemd[1]: var-lib-kubelet-pods-69a6a41a\x2d9ffb\x2d4b15\x2db959\x2d5e02864b1cb1-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Nov 12 22:04:54.665925 kubelet[3195]: I1112 22:04:54.665855 3195 scope.go:117] "RemoveContainer" containerID="24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb" Nov 12 22:04:54.668939 containerd[1811]: time="2024-11-12T22:04:54.668845585Z" level=info msg="RemoveContainer for \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\"" Nov 12 22:04:54.671205 containerd[1811]: time="2024-11-12T22:04:54.671186556Z" level=info msg="RemoveContainer for \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\" returns successfully" Nov 12 22:04:54.671306 kubelet[3195]: I1112 22:04:54.671297 3195 scope.go:117] "RemoveContainer" containerID="24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb" Nov 12 22:04:54.671408 containerd[1811]: time="2024-11-12T22:04:54.671386767Z" level=error msg="ContainerStatus for \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\": not found" Nov 12 22:04:54.671479 kubelet[3195]: E1112 22:04:54.671466 3195 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\": not found" containerID="24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb" Nov 12 22:04:54.671503 kubelet[3195]: I1112 22:04:54.671484 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb"} err="failed to get container status \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"24f4be8ddb31752dd37401bb6b44ce5901715622dd23508d8b556d72118b00fb\": not found" Nov 12 22:04:54.671503 kubelet[3195]: I1112 22:04:54.671495 3195 scope.go:117] "RemoveContainer" containerID="4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4" Nov 12 22:04:54.671548 containerd[1811]: time="2024-11-12T22:04:54.671499557Z" level=info msg="CreateContainer within sandbox \"8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 22:04:54.671982 containerd[1811]: time="2024-11-12T22:04:54.671972248Z" level=info msg="RemoveContainer for \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\"" Nov 12 22:04:54.673311 containerd[1811]: time="2024-11-12T22:04:54.673294624Z" level=info msg="RemoveContainer for \"4a745838641a0b226fbbe586423c30dd4d0c0c271ba63f3a7cb5fa4f29b167c4\" returns successfully" Nov 12 22:04:54.673398 kubelet[3195]: I1112 22:04:54.673382 3195 scope.go:117] "RemoveContainer" containerID="b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b" Nov 12 22:04:54.673877 kubelet[3195]: I1112 22:04:54.673848 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-kll9h" podStartSLOduration=25.215426487 podStartE2EDuration="28.673835778s" podCreationTimestamp="2024-11-12 22:04:26 +0000 UTC" firstStartedPulling="2024-11-12 22:04:50.677228368 +0000 UTC m=+44.253917848" lastFinishedPulling="2024-11-12 22:04:54.135637658 +0000 UTC m=+47.712327139" observedRunningTime="2024-11-12 22:04:54.673506194 +0000 UTC m=+48.250195687" watchObservedRunningTime="2024-11-12 22:04:54.673835778 +0000 UTC m=+48.250525256" Nov 12 22:04:54.674204 containerd[1811]: time="2024-11-12T22:04:54.674003746Z" level=info msg="RemoveContainer for \"b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b\"" Nov 12 22:04:54.677069 containerd[1811]: time="2024-11-12T22:04:54.677044758Z" level=info msg="RemoveContainer for \"b59fbaec25931636533d6dcda4103bf123de5314f237f3862bccecb07853bf0b\" returns successfully" Nov 12 22:04:54.677357 kubelet[3195]: I1112 22:04:54.677184 3195 scope.go:117] "RemoveContainer" containerID="84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f" Nov 12 22:04:54.677723 containerd[1811]: time="2024-11-12T22:04:54.677708002Z" level=info msg="CreateContainer within sandbox \"8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598\"" Nov 12 22:04:54.677821 containerd[1811]: time="2024-11-12T22:04:54.677810139Z" level=info msg="RemoveContainer for \"84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f\"" Nov 12 22:04:54.677858 containerd[1811]: time="2024-11-12T22:04:54.677851268Z" level=info msg="StartContainer for \"1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598\"" Nov 12 22:04:54.678984 containerd[1811]: time="2024-11-12T22:04:54.678970671Z" level=info msg="RemoveContainer for \"84825fc6436bf61d75616116071f708b33902ca56e00b9653d74c8853ea2f21f\" returns successfully" Nov 12 22:04:54.695647 kubelet[3195]: I1112 22:04:54.695605 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4b7b8c7-97n44" podStartSLOduration=24.703548664 podStartE2EDuration="28.695589927s" podCreationTimestamp="2024-11-12 22:04:26 +0000 UTC" firstStartedPulling="2024-11-12 22:04:49.752793993 +0000 UTC m=+43.329483484" lastFinishedPulling="2024-11-12 22:04:53.744835266 +0000 UTC m=+47.321524747" observedRunningTime="2024-11-12 22:04:54.695489437 +0000 UTC m=+48.272178918" watchObservedRunningTime="2024-11-12 22:04:54.695589927 +0000 UTC m=+48.272279405" Nov 12 22:04:54.701364 systemd[1]: Started cri-containerd-1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598.scope - libcontainer container 
1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598. Nov 12 22:04:54.715966 containerd[1811]: time="2024-11-12T22:04:54.715945634Z" level=info msg="StartContainer for \"1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598\" returns successfully" Nov 12 22:04:54.859129 systemd[1]: cri-containerd-1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598.scope: Deactivated successfully. Nov 12 22:04:54.885408 containerd[1811]: time="2024-11-12T22:04:54.885213337Z" level=info msg="shim disconnected" id=1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598 namespace=k8s.io Nov 12 22:04:54.885408 containerd[1811]: time="2024-11-12T22:04:54.885361783Z" level=warning msg="cleaning up after shim disconnected" id=1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598 namespace=k8s.io Nov 12 22:04:54.885408 containerd[1811]: time="2024-11-12T22:04:54.885393984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:55.561339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f1a807ef95c098b62a52025438b00ad5441b4b0945d76593ea80c286ba6d598-rootfs.mount: Deactivated successfully. Nov 12 22:04:55.678997 kubelet[3195]: I1112 22:04:55.678957 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:04:55.686592 containerd[1811]: time="2024-11-12T22:04:55.686568757Z" level=info msg="CreateContainer within sandbox \"8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 22:04:55.692565 containerd[1811]: time="2024-11-12T22:04:55.692542952Z" level=info msg="CreateContainer within sandbox \"8538d6d560518616624823b1c9c9f91883820c3b507053b4fa4999bcca294539\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c192f76aa25245260670c69b8d00f8a1d98dc2555ec70a8089246a94912042c8\"" Nov 12 22:04:55.692933 containerd[1811]: time="2024-11-12T22:04:55.692893759Z" level=info msg="StartContainer for \"c192f76aa25245260670c69b8d00f8a1d98dc2555ec70a8089246a94912042c8\"" Nov 12 22:04:55.716432 systemd[1]: Started cri-containerd-c192f76aa25245260670c69b8d00f8a1d98dc2555ec70a8089246a94912042c8.scope - libcontainer container c192f76aa25245260670c69b8d00f8a1d98dc2555ec70a8089246a94912042c8. Nov 12 22:04:55.731316 containerd[1811]: time="2024-11-12T22:04:55.731292389Z" level=info msg="StartContainer for \"c192f76aa25245260670c69b8d00f8a1d98dc2555ec70a8089246a94912042c8\" returns successfully" Nov 12 22:04:55.996091 kubelet[3195]: I1112 22:04:55.995975 3195 topology_manager.go:215] "Topology Admit Handler" podUID="c0a24848-7be8-445c-99f9-0bf3c74a418b" podNamespace="calico-system" podName="calico-typha-64fbd8bc86-tw56g" Nov 12 22:04:55.996091 kubelet[3195]: E1112 22:04:55.996043 3195 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a6a41a-9ffb-4b15-b959-5e02864b1cb1" containerName="calico-typha" Nov 12 22:04:55.996091 kubelet[3195]: I1112 22:04:55.996080 3195 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a6a41a-9ffb-4b15-b959-5e02864b1cb1" containerName="calico-typha" Nov 12 22:04:56.003779 systemd[1]: Created slice kubepods-besteffort-podc0a24848_7be8_445c_99f9_0bf3c74a418b.slice - libcontainer container kubepods-besteffort-podc0a24848_7be8_445c_99f9_0bf3c74a418b.slice. 
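Earlier, kubelet's ContainerStatus call for the already-removed container 24f4be8ddb31... came back as "rpc error: code = NotFound", which kubelet treats as "already gone" rather than as a real failure. A small sketch of that distinction on the client side, assuming the same CRI v1 client and the standard grpc status package:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// containerGone reports whether a ContainerStatus call failed only because the
// container no longer exists (the NotFound case in the entries above), as
// opposed to a transport or runtime error that should be surfaced.
func containerGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if err == nil {
		return false, nil // container still known to the runtime
	}
	if status.Code(err) == codes.NotFound {
		return true, nil // already removed; safe to forget it
	}
	return false, err // some other failure
}

func main() {
	_ = containerGone // client wiring omitted; see the earlier CRI sketch
	log.Println("sketch only")
}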
Nov 12 22:04:56.042005 kubelet[3195]: I1112 22:04:56.041928 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zqj\" (UniqueName: \"kubernetes.io/projected/c0a24848-7be8-445c-99f9-0bf3c74a418b-kube-api-access-z9zqj\") pod \"calico-typha-64fbd8bc86-tw56g\" (UID: \"c0a24848-7be8-445c-99f9-0bf3c74a418b\") " pod="calico-system/calico-typha-64fbd8bc86-tw56g" Nov 12 22:04:56.042325 kubelet[3195]: I1112 22:04:56.042053 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0a24848-7be8-445c-99f9-0bf3c74a418b-tigera-ca-bundle\") pod \"calico-typha-64fbd8bc86-tw56g\" (UID: \"c0a24848-7be8-445c-99f9-0bf3c74a418b\") " pod="calico-system/calico-typha-64fbd8bc86-tw56g" Nov 12 22:04:56.042325 kubelet[3195]: I1112 22:04:56.042138 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c0a24848-7be8-445c-99f9-0bf3c74a418b-typha-certs\") pod \"calico-typha-64fbd8bc86-tw56g\" (UID: \"c0a24848-7be8-445c-99f9-0bf3c74a418b\") " pod="calico-system/calico-typha-64fbd8bc86-tw56g" Nov 12 22:04:56.306517 containerd[1811]: time="2024-11-12T22:04:56.306496477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fbd8bc86-tw56g,Uid:c0a24848-7be8-445c-99f9-0bf3c74a418b,Namespace:calico-system,Attempt:0,}" Nov 12 22:04:56.315235 containerd[1811]: time="2024-11-12T22:04:56.315196881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:56.315414 containerd[1811]: time="2024-11-12T22:04:56.315399885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:56.315414 containerd[1811]: time="2024-11-12T22:04:56.315410009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:56.315508 containerd[1811]: time="2024-11-12T22:04:56.315455229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:56.328291 containerd[1811]: time="2024-11-12T22:04:56.328270073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:56.328502 containerd[1811]: time="2024-11-12T22:04:56.328481295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 22:04:56.328823 containerd[1811]: time="2024-11-12T22:04:56.328811658Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:56.329770 containerd[1811]: time="2024-11-12T22:04:56.329757041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:04:56.330449 containerd[1811]: time="2024-11-12T22:04:56.330436296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.194704325s" Nov 12 22:04:56.330478 containerd[1811]: time="2024-11-12T22:04:56.330450663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 22:04:56.333635 containerd[1811]: time="2024-11-12T22:04:56.333590175Z" level=info msg="CreateContainer within sandbox \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 22:04:56.335386 systemd[1]: Started cri-containerd-f30b238622f6aa0e42f26104ea53981f998428abc35aedaf8a251859b15cdb30.scope - libcontainer container f30b238622f6aa0e42f26104ea53981f998428abc35aedaf8a251859b15cdb30. Nov 12 22:04:56.337602 containerd[1811]: time="2024-11-12T22:04:56.337584355Z" level=info msg="CreateContainer within sandbox \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\"" Nov 12 22:04:56.337848 containerd[1811]: time="2024-11-12T22:04:56.337836632Z" level=info msg="StartContainer for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\"" Nov 12 22:04:56.347606 systemd[1]: Started cri-containerd-1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d.scope - libcontainer container 1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d. 
Nov 12 22:04:56.357785 containerd[1811]: time="2024-11-12T22:04:56.357763272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fbd8bc86-tw56g,Uid:c0a24848-7be8-445c-99f9-0bf3c74a418b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f30b238622f6aa0e42f26104ea53981f998428abc35aedaf8a251859b15cdb30\"" Nov 12 22:04:56.361192 containerd[1811]: time="2024-11-12T22:04:56.361173450Z" level=info msg="CreateContainer within sandbox \"f30b238622f6aa0e42f26104ea53981f998428abc35aedaf8a251859b15cdb30\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 22:04:56.365424 containerd[1811]: time="2024-11-12T22:04:56.365408433Z" level=info msg="CreateContainer within sandbox \"f30b238622f6aa0e42f26104ea53981f998428abc35aedaf8a251859b15cdb30\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"61b0ead1076cdff93e9cc89db4591fdb7be563ecdd572df901d3f398ff88b0fd\"" Nov 12 22:04:56.365596 containerd[1811]: time="2024-11-12T22:04:56.365581649Z" level=info msg="StartContainer for \"61b0ead1076cdff93e9cc89db4591fdb7be563ecdd572df901d3f398ff88b0fd\"" Nov 12 22:04:56.371690 containerd[1811]: time="2024-11-12T22:04:56.371664972Z" level=info msg="StartContainer for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" returns successfully" Nov 12 22:04:56.388405 systemd[1]: Started cri-containerd-61b0ead1076cdff93e9cc89db4591fdb7be563ecdd572df901d3f398ff88b0fd.scope - libcontainer container 61b0ead1076cdff93e9cc89db4591fdb7be563ecdd572df901d3f398ff88b0fd. Nov 12 22:04:56.414136 containerd[1811]: time="2024-11-12T22:04:56.414115771Z" level=info msg="StartContainer for \"61b0ead1076cdff93e9cc89db4591fdb7be563ecdd572df901d3f398ff88b0fd\" returns successfully" Nov 12 22:04:56.504439 kubelet[3195]: I1112 22:04:56.504366 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a6a41a-9ffb-4b15-b959-5e02864b1cb1" path="/var/lib/kubelet/pods/69a6a41a-9ffb-4b15-b959-5e02864b1cb1/volumes" Nov 12 22:04:56.505944 kubelet[3195]: I1112 22:04:56.505902 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9956537c-d3e7-4a14-bec4-dcbb471e59d8" path="/var/lib/kubelet/pods/9956537c-d3e7-4a14-bec4-dcbb471e59d8/volumes" Nov 12 22:04:56.688841 containerd[1811]: time="2024-11-12T22:04:56.688577100Z" level=info msg="StopContainer for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" with timeout 30 (s)" Nov 12 22:04:56.689824 containerd[1811]: time="2024-11-12T22:04:56.689460135Z" level=info msg="Stop container \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" with signal terminated" Nov 12 22:04:56.711630 systemd[1]: cri-containerd-1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d.scope: Deactivated successfully. 
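The entries above stop the old calico-kube-controllers container with "timeout 30 (s)" and "signal terminated": in CRI terms a StopContainer call with a grace period, where the runtime delivers SIGTERM immediately and escalates to SIGKILL if the timeout expires, after which the container's systemd scope is deactivated. A sketch of that call, assuming the same CRI v1 client as above:

package main

import (
	"context"
	"log"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopWithGrace mirrors the "StopContainer ... with timeout 30 (s)" entry:
// SIGTERM first, SIGKILL once the timeout (in seconds) runs out.
func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string, seconds int64) error {
	_, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: id,
		Timeout:     seconds,
	})
	return err
}

func main() {
	_ = stopWithGrace // client wiring omitted; see the earlier CRI sketch
	log.Println("sketch only")
}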
Nov 12 22:04:56.718661 kubelet[3195]: I1112 22:04:56.718575 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7546446886-8b475" podStartSLOduration=26.070101707 podStartE2EDuration="30.718541162s" podCreationTimestamp="2024-11-12 22:04:26 +0000 UTC" firstStartedPulling="2024-11-12 22:04:51.682277169 +0000 UTC m=+45.258966651" lastFinishedPulling="2024-11-12 22:04:56.330716626 +0000 UTC m=+49.907406106" observedRunningTime="2024-11-12 22:04:56.717717332 +0000 UTC m=+50.294406854" watchObservedRunningTime="2024-11-12 22:04:56.718541162 +0000 UTC m=+50.295230671" Nov 12 22:04:56.733619 kubelet[3195]: I1112 22:04:56.733562 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ztfzr" podStartSLOduration=3.733538938 podStartE2EDuration="3.733538938s" podCreationTimestamp="2024-11-12 22:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:56.732860053 +0000 UTC m=+50.309549559" watchObservedRunningTime="2024-11-12 22:04:56.733538938 +0000 UTC m=+50.310228426" Nov 12 22:04:56.735869 containerd[1811]: time="2024-11-12T22:04:56.735837879Z" level=error msg="ExecSync for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" failed" error="failed to exec in container: failed to start exec \"64f2bceb8a9b8422ec02d796f01dda292004ab53765a3810924fc4658c064d38\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" Nov 12 22:04:56.736025 kubelet[3195]: E1112 22:04:56.735987 3195 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"64f2bceb8a9b8422ec02d796f01dda292004ab53765a3810924fc4658c064d38\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d" cmd=["/usr/bin/check-status","-r"] Nov 12 22:04:56.741147 kubelet[3195]: I1112 22:04:56.741101 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64fbd8bc86-tw56g" podStartSLOduration=4.741086238 podStartE2EDuration="4.741086238s" podCreationTimestamp="2024-11-12 22:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:56.740920577 +0000 UTC m=+50.317610067" watchObservedRunningTime="2024-11-12 22:04:56.741086238 +0000 UTC m=+50.317775720" Nov 12 22:04:56.745675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d-rootfs.mount: Deactivated successfully. 
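The pod_startup_latency_tracker entries report two figures per pod: podStartE2EDuration (observed running time minus pod creation time) and a shorter podStartSLOduration. From the numbers above, the SLO value appears to equal the end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which is also why pods that pulled nothing report the two values as identical. A quick reconstruction of the calico-kube-controllers-7546446886-8b475 numbers, as an illustration of that relationship rather than kubelet's actual code:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the entry above.
	created := parse("2024-11-12 22:04:26 +0000 UTC")
	firstPull := parse("2024-11-12 22:04:51.682277169 +0000 UTC")
	lastPull := parse("2024-11-12 22:04:56.330716626 +0000 UTC")
	observed := parse("2024-11-12 22:04:56.718541162 +0000 UTC")

	e2e := observed.Sub(created)         // ~30.718541162s, the reported podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~26.0701017s, matching podStartSLOduration

	fmt.Println("e2e:", e2e, "slo:", slo)
}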
Nov 12 22:04:56.924189 containerd[1811]: time="2024-11-12T22:04:56.924155471Z" level=info msg="shim disconnected" id=1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d namespace=k8s.io Nov 12 22:04:56.924189 containerd[1811]: time="2024-11-12T22:04:56.924186212Z" level=warning msg="cleaning up after shim disconnected" id=1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d namespace=k8s.io Nov 12 22:04:56.924189 containerd[1811]: time="2024-11-12T22:04:56.924192677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:56.924376 containerd[1811]: time="2024-11-12T22:04:56.924236968Z" level=error msg="ExecSync for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"7e9d60af1192aca6fdc7843572b516bb7b85d7be4ee16eeb0718bc8ea10c6c83\": task 1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d not found: not found" Nov 12 22:04:56.924478 kubelet[3195]: E1112 22:04:56.924419 3195 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"7e9d60af1192aca6fdc7843572b516bb7b85d7be4ee16eeb0718bc8ea10c6c83\": task 1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d not found: not found" containerID="1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d" cmd=["/usr/bin/check-status","-r"] Nov 12 22:04:56.924911 containerd[1811]: time="2024-11-12T22:04:56.924895185Z" level=error msg="ExecSync for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d not found: not found" Nov 12 22:04:56.924964 kubelet[3195]: E1112 22:04:56.924950 3195 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d not found: not found" containerID="1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d" cmd=["/usr/bin/check-status","-r"] Nov 12 22:04:56.931930 containerd[1811]: time="2024-11-12T22:04:56.931876823Z" level=info msg="StopContainer for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" returns successfully" Nov 12 22:04:56.932145 containerd[1811]: time="2024-11-12T22:04:56.932134404Z" level=info msg="StopPodSandbox for \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\"" Nov 12 22:04:56.932171 containerd[1811]: time="2024-11-12T22:04:56.932151038Z" level=info msg="Container to stop \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:04:56.934071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c-shm.mount: Deactivated successfully. Nov 12 22:04:56.935645 systemd[1]: cri-containerd-679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c.scope: Deactivated successfully. Nov 12 22:04:56.945991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c-rootfs.mount: Deactivated successfully. 
Nov 12 22:04:56.946485 containerd[1811]: time="2024-11-12T22:04:56.946444321Z" level=info msg="shim disconnected" id=679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c namespace=k8s.io Nov 12 22:04:56.946485 containerd[1811]: time="2024-11-12T22:04:56.946484393Z" level=warning msg="cleaning up after shim disconnected" id=679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c namespace=k8s.io Nov 12 22:04:56.946568 containerd[1811]: time="2024-11-12T22:04:56.946491624Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:04:56.976957 systemd-networkd[1725]: cali36266a55883: Link DOWN Nov 12 22:04:56.976960 systemd-networkd[1725]: cali36266a55883: Lost carrier Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:56.976 [INFO][6831] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:56.976 [INFO][6831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" iface="eth0" netns="/var/run/netns/cni-8f7abe8f-2766-45c3-4ca8-43b93c7a709d" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:56.976 [INFO][6831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" iface="eth0" netns="/var/run/netns/cni-8f7abe8f-2766-45c3-4ca8-43b93c7a709d" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:56.995 [INFO][6831] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" after=18.79521ms iface="eth0" netns="/var/run/netns/cni-8f7abe8f-2766-45c3-4ca8-43b93c7a709d" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:56.995 [INFO][6831] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:56.995 [INFO][6831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:57.006 [INFO][6843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:57.006 [INFO][6843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:57.006 [INFO][6843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:57.021 [INFO][6843] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:57.021 [INFO][6843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:57.022 [INFO][6843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:57.023582 containerd[1811]: 2024-11-12 22:04:57.022 [INFO][6831] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:04:57.023912 containerd[1811]: time="2024-11-12T22:04:57.023678491Z" level=info msg="TearDown network for sandbox \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" successfully" Nov 12 22:04:57.023912 containerd[1811]: time="2024-11-12T22:04:57.023695896Z" level=info msg="StopPodSandbox for \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" returns successfully" Nov 12 22:04:57.023970 containerd[1811]: time="2024-11-12T22:04:57.023956578Z" level=info msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\"" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.042 [WARNING][6871] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0", GenerateName:"calico-kube-controllers-7546446886-", Namespace:"calico-system", SelfLink:"", UID:"f7d2b67c-78b9-4645-b1a5-5cfb719e011e", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7546446886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c", Pod:"calico-kube-controllers-7546446886-8b475", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36266a55883", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.043 [INFO][6871] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.043 [INFO][6871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" iface="eth0" netns="" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.043 [INFO][6871] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.043 [INFO][6871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.053 [INFO][6884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.053 [INFO][6884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.053 [INFO][6884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.057 [WARNING][6884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.057 [INFO][6884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.058 [INFO][6884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:04:57.059507 containerd[1811]: 2024-11-12 22:04:57.058 [INFO][6871] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:04:57.059507 containerd[1811]: time="2024-11-12T22:04:57.059492460Z" level=info msg="TearDown network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" successfully" Nov 12 22:04:57.059507 containerd[1811]: time="2024-11-12T22:04:57.059506791Z" level=info msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" returns successfully" Nov 12 22:04:57.149683 kubelet[3195]: I1112 22:04:57.149573 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttvl7\" (UniqueName: \"kubernetes.io/projected/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-kube-api-access-ttvl7\") pod \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\" (UID: \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\") " Nov 12 22:04:57.149683 kubelet[3195]: I1112 22:04:57.149669 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-tigera-ca-bundle\") pod \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\" (UID: \"f7d2b67c-78b9-4645-b1a5-5cfb719e011e\") " Nov 12 22:04:57.155809 kubelet[3195]: I1112 22:04:57.155698 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-kube-api-access-ttvl7" (OuterVolumeSpecName: "kube-api-access-ttvl7") pod "f7d2b67c-78b9-4645-b1a5-5cfb719e011e" (UID: "f7d2b67c-78b9-4645-b1a5-5cfb719e011e"). InnerVolumeSpecName "kube-api-access-ttvl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:04:57.158743 kubelet[3195]: I1112 22:04:57.158642 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f7d2b67c-78b9-4645-b1a5-5cfb719e011e" (UID: "f7d2b67c-78b9-4645-b1a5-5cfb719e011e"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:04:57.250753 kubelet[3195]: I1112 22:04:57.250551 3195 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ttvl7\" (UniqueName: \"kubernetes.io/projected/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-kube-api-access-ttvl7\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:57.250753 kubelet[3195]: I1112 22:04:57.250631 3195 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d2b67c-78b9-4645-b1a5-5cfb719e011e-tigera-ca-bundle\") on node \"ci-4081.2.0-a-a9d0314af7\" DevicePath \"\"" Nov 12 22:04:57.568522 systemd[1]: var-lib-kubelet-pods-f7d2b67c\x2d78b9\x2d4645\x2db1a5\x2d5cfb719e011e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Nov 12 22:04:57.568950 systemd[1]: run-netns-cni\x2d8f7abe8f\x2d2766\x2d45c3\x2d4ca8\x2d43b93c7a709d.mount: Deactivated successfully. Nov 12 22:04:57.569332 systemd[1]: var-lib-kubelet-pods-f7d2b67c\x2d78b9\x2d4645\x2db1a5\x2d5cfb719e011e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dttvl7.mount: Deactivated successfully. Nov 12 22:04:57.705792 kubelet[3195]: I1112 22:04:57.705697 3195 scope.go:117] "RemoveContainer" containerID="1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d" Nov 12 22:04:57.708583 containerd[1811]: time="2024-11-12T22:04:57.708430461Z" level=info msg="RemoveContainer for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\"" Nov 12 22:04:57.710820 containerd[1811]: time="2024-11-12T22:04:57.710802360Z" level=info msg="RemoveContainer for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" returns successfully" Nov 12 22:04:57.710910 kubelet[3195]: I1112 22:04:57.710898 3195 scope.go:117] "RemoveContainer" containerID="1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d" Nov 12 22:04:57.711041 containerd[1811]: time="2024-11-12T22:04:57.711021945Z" level=error msg="ContainerStatus for \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\": not found" Nov 12 22:04:57.711048 systemd[1]: Removed slice kubepods-besteffort-podf7d2b67c_78b9_4645_b1a5_5cfb719e011e.slice - libcontainer container kubepods-besteffort-podf7d2b67c_78b9_4645_b1a5_5cfb719e011e.slice. 
Nov 12 22:04:57.711156 kubelet[3195]: E1112 22:04:57.711092 3195 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\": not found" containerID="1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d" Nov 12 22:04:57.711156 kubelet[3195]: I1112 22:04:57.711111 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d"} err="failed to get container status \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1cb19101b5832194225f878c2bffa7ec71d9e356c642e35fcf552bfa099f140d\": not found" Nov 12 22:04:57.749599 kubelet[3195]: I1112 22:04:57.749568 3195 topology_manager.go:215] "Topology Admit Handler" podUID="5c4ba79f-798d-4869-9b30-c474341e63ca" podNamespace="calico-system" podName="calico-kube-controllers-749cc576cc-gmjcj" Nov 12 22:04:57.749837 kubelet[3195]: E1112 22:04:57.749627 3195 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7d2b67c-78b9-4645-b1a5-5cfb719e011e" containerName="calico-kube-controllers" Nov 12 22:04:57.749837 kubelet[3195]: I1112 22:04:57.749655 3195 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d2b67c-78b9-4645-b1a5-5cfb719e011e" containerName="calico-kube-controllers" Nov 12 22:04:57.753222 systemd[1]: Created slice kubepods-besteffort-pod5c4ba79f_798d_4869_9b30_c474341e63ca.slice - libcontainer container kubepods-besteffort-pod5c4ba79f_798d_4869_9b30_c474341e63ca.slice. Nov 12 22:04:57.754232 kubelet[3195]: I1112 22:04:57.754204 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c4ba79f-798d-4869-9b30-c474341e63ca-tigera-ca-bundle\") pod \"calico-kube-controllers-749cc576cc-gmjcj\" (UID: \"5c4ba79f-798d-4869-9b30-c474341e63ca\") " pod="calico-system/calico-kube-controllers-749cc576cc-gmjcj" Nov 12 22:04:57.754324 kubelet[3195]: I1112 22:04:57.754266 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4qx\" (UniqueName: \"kubernetes.io/projected/5c4ba79f-798d-4869-9b30-c474341e63ca-kube-api-access-wk4qx\") pod \"calico-kube-controllers-749cc576cc-gmjcj\" (UID: \"5c4ba79f-798d-4869-9b30-c474341e63ca\") " pod="calico-system/calico-kube-controllers-749cc576cc-gmjcj" Nov 12 22:04:58.055912 containerd[1811]: time="2024-11-12T22:04:58.055800475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749cc576cc-gmjcj,Uid:5c4ba79f-798d-4869-9b30-c474341e63ca,Namespace:calico-system,Attempt:0,}" Nov 12 22:04:58.133441 systemd-networkd[1725]: cali4abdd15e903: Link UP Nov 12 22:04:58.133596 systemd-networkd[1725]: cali4abdd15e903: Gained carrier Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.076 [INFO][6975] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0 calico-kube-controllers-749cc576cc- calico-system 5c4ba79f-798d-4869-9b30-c474341e63ca 1029 0 2024-11-12 22:04:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:749cc576cc 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.0-a-a9d0314af7 calico-kube-controllers-749cc576cc-gmjcj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4abdd15e903 [] []}} ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.076 [INFO][6975] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.091 [INFO][6993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" HandleID="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.096 [INFO][6993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" HandleID="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051f20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-a9d0314af7", "pod":"calico-kube-controllers-749cc576cc-gmjcj", "timestamp":"2024-11-12 22:04:58.091379976 +0000 UTC"}, Hostname:"ci-4081.2.0-a-a9d0314af7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.096 [INFO][6993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.096 [INFO][6993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.096 [INFO][6993] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-a9d0314af7' Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.097 [INFO][6993] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.100 [INFO][6993] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.103 [INFO][6993] ipam/ipam.go 489: Trying affinity for 192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.104 [INFO][6993] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.105 [INFO][6993] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.106 [INFO][6993] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.107 [INFO][6993] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5 Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.109 [INFO][6993] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.131 [INFO][6993] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.71/26] block=192.168.20.64/26 handle="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.131 [INFO][6993] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.71/26] handle="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" host="ci-4081.2.0-a-a9d0314af7" Nov 12 22:04:58.139927 containerd[1811]: 2024-11-12 22:04:58.131 [INFO][6993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
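The ipam trace above confirms the node's affine block 192.168.20.64/26 and assigns 192.168.20.71 from it; Calico's default IPAM block size is /26, i.e. 64 addresses per per-node block. A quick, purely illustrative check with net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block the node holds an affinity for, and the address the trace
	// above assigned from it.
	block := netip.MustParsePrefix("192.168.20.64/26")
	assigned := netip.MustParseAddr("192.168.20.71")

	fmt.Println("block contains assigned address:", block.Contains(assigned)) // true
	fmt.Println("addresses per /26 block:", 1<<(32-block.Bits()))             // 64
}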
Nov 12 22:04:58.140512 containerd[1811]: 2024-11-12 22:04:58.131 [INFO][6993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.71/26] IPv6=[] ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" HandleID="k8s-pod-network.7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" Nov 12 22:04:58.140512 containerd[1811]: 2024-11-12 22:04:58.132 [INFO][6975] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0", GenerateName:"calico-kube-controllers-749cc576cc-", Namespace:"calico-system", SelfLink:"", UID:"5c4ba79f-798d-4869-9b30-c474341e63ca", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749cc576cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"", Pod:"calico-kube-controllers-749cc576cc-gmjcj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4abdd15e903", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:58.140512 containerd[1811]: 2024-11-12 22:04:58.132 [INFO][6975] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.71/32] ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" Nov 12 22:04:58.140512 containerd[1811]: 2024-11-12 22:04:58.132 [INFO][6975] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4abdd15e903 ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" Nov 12 22:04:58.140512 containerd[1811]: 2024-11-12 22:04:58.133 [INFO][6975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" Nov 12 22:04:58.140684 
containerd[1811]: 2024-11-12 22:04:58.133 [INFO][6975] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0", GenerateName:"calico-kube-controllers-749cc576cc-", Namespace:"calico-system", SelfLink:"", UID:"5c4ba79f-798d-4869-9b30-c474341e63ca", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749cc576cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5", Pod:"calico-kube-controllers-749cc576cc-gmjcj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4abdd15e903", MAC:"ae:85:b2:aa:ca:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:04:58.140684 containerd[1811]: 2024-11-12 22:04:58.138 [INFO][6975] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5" Namespace="calico-system" Pod="calico-kube-controllers-749cc576cc-gmjcj" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--749cc576cc--gmjcj-eth0" Nov 12 22:04:58.150637 containerd[1811]: time="2024-11-12T22:04:58.150594540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:04:58.150637 containerd[1811]: time="2024-11-12T22:04:58.150627968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:04:58.150637 containerd[1811]: time="2024-11-12T22:04:58.150636993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:58.150771 containerd[1811]: time="2024-11-12T22:04:58.150693831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:04:58.174469 systemd[1]: Started cri-containerd-7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5.scope - libcontainer container 7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5. 
Nov 12 22:04:58.205359 containerd[1811]: time="2024-11-12T22:04:58.205334637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749cc576cc-gmjcj,Uid:5c4ba79f-798d-4869-9b30-c474341e63ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5\"" Nov 12 22:04:58.209389 containerd[1811]: time="2024-11-12T22:04:58.209334767Z" level=info msg="CreateContainer within sandbox \"7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 22:04:58.214291 containerd[1811]: time="2024-11-12T22:04:58.214248702Z" level=info msg="CreateContainer within sandbox \"7725f18364ee7409e005c48ea7210a348652c53afe2983524c451dd1794bbee5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"06b000e3149b4c58685691b8d75b6de1915c149bc7fe9b5a332ee876098ab6a5\"" Nov 12 22:04:58.214455 containerd[1811]: time="2024-11-12T22:04:58.214407553Z" level=info msg="StartContainer for \"06b000e3149b4c58685691b8d75b6de1915c149bc7fe9b5a332ee876098ab6a5\"" Nov 12 22:04:58.237517 systemd[1]: Started cri-containerd-06b000e3149b4c58685691b8d75b6de1915c149bc7fe9b5a332ee876098ab6a5.scope - libcontainer container 06b000e3149b4c58685691b8d75b6de1915c149bc7fe9b5a332ee876098ab6a5. Nov 12 22:04:58.271190 containerd[1811]: time="2024-11-12T22:04:58.271161275Z" level=info msg="StartContainer for \"06b000e3149b4c58685691b8d75b6de1915c149bc7fe9b5a332ee876098ab6a5\" returns successfully" Nov 12 22:04:58.504324 kubelet[3195]: I1112 22:04:58.504066 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d2b67c-78b9-4645-b1a5-5cfb719e011e" path="/var/lib/kubelet/pods/f7d2b67c-78b9-4645-b1a5-5cfb719e011e/volumes" Nov 12 22:04:58.737957 kubelet[3195]: I1112 22:04:58.737762 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-749cc576cc-gmjcj" podStartSLOduration=1.7377070319999999 podStartE2EDuration="1.737707032s" podCreationTimestamp="2024-11-12 22:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:04:58.736754132 +0000 UTC m=+52.313443783" watchObservedRunningTime="2024-11-12 22:04:58.737707032 +0000 UTC m=+52.314396601" Nov 12 22:05:00.037589 systemd-networkd[1725]: cali4abdd15e903: Gained IPv6LL Nov 12 22:05:05.058296 kubelet[3195]: I1112 22:05:05.058162 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:05:06.490124 containerd[1811]: time="2024-11-12T22:05:06.490028476Z" level=info msg="StopPodSandbox for \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\"" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.546 [WARNING][7462] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef", Pod:"coredns-7db6d8ff4d-4b7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fa94260055", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.546 [INFO][7462] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.546 [INFO][7462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" iface="eth0" netns="" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.546 [INFO][7462] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.546 [INFO][7462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.560 [INFO][7478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.560 [INFO][7478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.560 [INFO][7478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.565 [WARNING][7478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.565 [INFO][7478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.567 [INFO][7478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.568906 containerd[1811]: 2024-11-12 22:05:06.567 [INFO][7462] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.569402 containerd[1811]: time="2024-11-12T22:05:06.568935739Z" level=info msg="TearDown network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\" successfully" Nov 12 22:05:06.569402 containerd[1811]: time="2024-11-12T22:05:06.568956871Z" level=info msg="StopPodSandbox for \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\" returns successfully" Nov 12 22:05:06.569402 containerd[1811]: time="2024-11-12T22:05:06.569375822Z" level=info msg="RemovePodSandbox for \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\"" Nov 12 22:05:06.569402 containerd[1811]: time="2024-11-12T22:05:06.569400363Z" level=info msg="Forcibly stopping sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\"" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.593 [WARNING][7505] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d6ab8fe9-0b2f-42b5-9df3-8800f48bbf2b", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"a4cfabd43766698e75bb3b9388fbff6bc3870f7d5a14e487ec8a87007e999fef", Pod:"coredns-7db6d8ff4d-4b7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fa94260055", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.594 [INFO][7505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.594 [INFO][7505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" iface="eth0" netns="" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.594 [INFO][7505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.594 [INFO][7505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.609 [INFO][7520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.609 [INFO][7520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.610 [INFO][7520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.614 [WARNING][7520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.614 [INFO][7520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" HandleID="k8s-pod-network.bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--4b7bm-eth0" Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.615 [INFO][7520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.617008 containerd[1811]: 2024-11-12 22:05:06.616 [INFO][7505] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd" Nov 12 22:05:06.617458 containerd[1811]: time="2024-11-12T22:05:06.617041284Z" level=info msg="TearDown network for sandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\" successfully" Nov 12 22:05:06.618620 containerd[1811]: time="2024-11-12T22:05:06.618603986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:05:06.618664 containerd[1811]: time="2024-11-12T22:05:06.618635816Z" level=info msg="RemovePodSandbox \"bd245a0d1b8f79797f50b48837911c87123febdf8eb4d76f05297308083753dd\" returns successfully" Nov 12 22:05:06.618959 containerd[1811]: time="2024-11-12T22:05:06.618924272Z" level=info msg="StopPodSandbox for \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\"" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.636 [WARNING][7551] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"920c6399-0c92-4fd2-94f7-2a8b4fbecff5", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444", Pod:"calico-apiserver-7d4b7b8c7-97n44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01f9ee2b857", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.636 [INFO][7551] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.636 [INFO][7551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" iface="eth0" netns="" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.636 [INFO][7551] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.636 [INFO][7551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.647 [INFO][7567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.647 [INFO][7567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.647 [INFO][7567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.650 [WARNING][7567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.650 [INFO][7567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.651 [INFO][7567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.652600 containerd[1811]: 2024-11-12 22:05:06.651 [INFO][7551] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.652600 containerd[1811]: time="2024-11-12T22:05:06.652597922Z" level=info msg="TearDown network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\" successfully" Nov 12 22:05:06.653009 containerd[1811]: time="2024-11-12T22:05:06.652613869Z" level=info msg="StopPodSandbox for \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\" returns successfully" Nov 12 22:05:06.653009 containerd[1811]: time="2024-11-12T22:05:06.652900096Z" level=info msg="RemovePodSandbox for \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\"" Nov 12 22:05:06.653009 containerd[1811]: time="2024-11-12T22:05:06.652923139Z" level=info msg="Forcibly stopping sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\"" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.671 [WARNING][7593] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"920c6399-0c92-4fd2-94f7-2a8b4fbecff5", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"2e677a7152699ae0cb62b77c34e6f0ebb98396bbf83d028dd9a121f4832b8444", Pod:"calico-apiserver-7d4b7b8c7-97n44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01f9ee2b857", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.671 [INFO][7593] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.671 [INFO][7593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" iface="eth0" netns="" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.671 [INFO][7593] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.671 [INFO][7593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.681 [INFO][7607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.682 [INFO][7607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.682 [INFO][7607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.685 [WARNING][7607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.685 [INFO][7607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" HandleID="k8s-pod-network.0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--97n44-eth0" Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.686 [INFO][7607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.687930 containerd[1811]: 2024-11-12 22:05:06.687 [INFO][7593] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a" Nov 12 22:05:06.688214 containerd[1811]: time="2024-11-12T22:05:06.687928131Z" level=info msg="TearDown network for sandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\" successfully" Nov 12 22:05:06.704049 containerd[1811]: time="2024-11-12T22:05:06.704004567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:05:06.704049 containerd[1811]: time="2024-11-12T22:05:06.704035839Z" level=info msg="RemovePodSandbox \"0162d49af39eabc4ac2862eea46bcb3dbcaf82a32e1f759b218a5e78e0d8fb8a\" returns successfully" Nov 12 22:05:06.704361 containerd[1811]: time="2024-11-12T22:05:06.704300273Z" level=info msg="StopPodSandbox for \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\"" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.721 [WARNING][7634] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ab49c9f-8e53-44c9-8b09-d25ffd106921", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317", Pod:"csi-node-driver-5zp6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3dfe231bdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.721 [INFO][7634] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.721 [INFO][7634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" iface="eth0" netns="" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.722 [INFO][7634] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.722 [INFO][7634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.732 [INFO][7647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.732 [INFO][7647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.732 [INFO][7647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.735 [WARNING][7647] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.735 [INFO][7647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.736 [INFO][7647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.737505 containerd[1811]: 2024-11-12 22:05:06.736 [INFO][7634] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.737803 containerd[1811]: time="2024-11-12T22:05:06.737527363Z" level=info msg="TearDown network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\" successfully" Nov 12 22:05:06.737803 containerd[1811]: time="2024-11-12T22:05:06.737544012Z" level=info msg="StopPodSandbox for \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\" returns successfully" Nov 12 22:05:06.737842 containerd[1811]: time="2024-11-12T22:05:06.737820409Z" level=info msg="RemovePodSandbox for \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\"" Nov 12 22:05:06.737842 containerd[1811]: time="2024-11-12T22:05:06.737838707Z" level=info msg="Forcibly stopping sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\"" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.755 [WARNING][7673] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ab49c9f-8e53-44c9-8b09-d25ffd106921", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"1d0d81bfd9cf6deddca629a43c5ca2a6288c98052207a17c735bb8ac1cded317", Pod:"csi-node-driver-5zp6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3dfe231bdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.755 [INFO][7673] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.755 [INFO][7673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" iface="eth0" netns="" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.755 [INFO][7673] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.755 [INFO][7673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.766 [INFO][7685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.766 [INFO][7685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.766 [INFO][7685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.769 [WARNING][7685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.769 [INFO][7685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" HandleID="k8s-pod-network.79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Workload="ci--4081.2.0--a--a9d0314af7-k8s-csi--node--driver--5zp6z-eth0" Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.770 [INFO][7685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.771901 containerd[1811]: 2024-11-12 22:05:06.771 [INFO][7673] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995" Nov 12 22:05:06.771901 containerd[1811]: time="2024-11-12T22:05:06.771883623Z" level=info msg="TearDown network for sandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\" successfully" Nov 12 22:05:06.773780 containerd[1811]: time="2024-11-12T22:05:06.773727228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:05:06.773780 containerd[1811]: time="2024-11-12T22:05:06.773762553Z" level=info msg="RemovePodSandbox \"79803d5a72fb170cef44d601eb45d0a27577bf5694c23a743394ecc095d08995\" returns successfully" Nov 12 22:05:06.774078 containerd[1811]: time="2024-11-12T22:05:06.774043723Z" level=info msg="StopPodSandbox for \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\"" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.793 [WARNING][7712] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1044f927-3795-4d3f-aa90-111cd417f193", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638", Pod:"coredns-7db6d8ff4d-b8h7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cce9a5cbe3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.793 [INFO][7712] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.794 [INFO][7712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" iface="eth0" netns="" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.794 [INFO][7712] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.794 [INFO][7712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.805 [INFO][7722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.805 [INFO][7722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.805 [INFO][7722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.809 [WARNING][7722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.809 [INFO][7722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.810 [INFO][7722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.811924 containerd[1811]: 2024-11-12 22:05:06.811 [INFO][7712] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.812324 containerd[1811]: time="2024-11-12T22:05:06.811954290Z" level=info msg="TearDown network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\" successfully" Nov 12 22:05:06.812324 containerd[1811]: time="2024-11-12T22:05:06.811975712Z" level=info msg="StopPodSandbox for \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\" returns successfully" Nov 12 22:05:06.812324 containerd[1811]: time="2024-11-12T22:05:06.812296177Z" level=info msg="RemovePodSandbox for \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\"" Nov 12 22:05:06.812324 containerd[1811]: time="2024-11-12T22:05:06.812314914Z" level=info msg="Forcibly stopping sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\"" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.832 [WARNING][7747] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1044f927-3795-4d3f-aa90-111cd417f193", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"c88aa18793443e5f7b21fc2f1e4d3e994d7cfd6844464b43a12ae454f4a5c638", Pod:"coredns-7db6d8ff4d-b8h7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cce9a5cbe3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.832 [INFO][7747] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.832 [INFO][7747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" iface="eth0" netns="" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.832 [INFO][7747] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.832 [INFO][7747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.844 [INFO][7760] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.844 [INFO][7760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.844 [INFO][7760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.848 [WARNING][7760] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.848 [INFO][7760] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" HandleID="k8s-pod-network.2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-coredns--7db6d8ff4d--b8h7n-eth0" Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.849 [INFO][7760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.850421 containerd[1811]: 2024-11-12 22:05:06.849 [INFO][7747] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c" Nov 12 22:05:06.850753 containerd[1811]: time="2024-11-12T22:05:06.850446308Z" level=info msg="TearDown network for sandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\" successfully" Nov 12 22:05:06.851873 containerd[1811]: time="2024-11-12T22:05:06.851835435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:05:06.851873 containerd[1811]: time="2024-11-12T22:05:06.851862721Z" level=info msg="RemovePodSandbox \"2070d760aa8ab7df3a1177dd306f9d80dfdadb08d3a237808259495c2ebb010c\" returns successfully" Nov 12 22:05:06.852116 containerd[1811]: time="2024-11-12T22:05:06.852105984Z" level=info msg="StopPodSandbox for \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\"" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.870 [WARNING][7789] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.870 [INFO][7789] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.870 [INFO][7789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" iface="eth0" netns="" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.870 [INFO][7789] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.870 [INFO][7789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.880 [INFO][7801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.881 [INFO][7801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.881 [INFO][7801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.884 [WARNING][7801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.884 [INFO][7801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.885 [INFO][7801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.887067 containerd[1811]: 2024-11-12 22:05:06.886 [INFO][7789] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.887334 containerd[1811]: time="2024-11-12T22:05:06.887088162Z" level=info msg="TearDown network for sandbox \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" successfully" Nov 12 22:05:06.887334 containerd[1811]: time="2024-11-12T22:05:06.887111561Z" level=info msg="StopPodSandbox for \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" returns successfully" Nov 12 22:05:06.887441 containerd[1811]: time="2024-11-12T22:05:06.887404777Z" level=info msg="RemovePodSandbox for \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\"" Nov 12 22:05:06.887441 containerd[1811]: time="2024-11-12T22:05:06.887421231Z" level=info msg="Forcibly stopping sandbox \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\"" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.904 [WARNING][7828] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.904 [INFO][7828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.904 [INFO][7828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" iface="eth0" netns="" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.904 [INFO][7828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.904 [INFO][7828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.914 [INFO][7841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.914 [INFO][7841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.914 [INFO][7841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.918 [WARNING][7841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.918 [INFO][7841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" HandleID="k8s-pod-network.679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.919 [INFO][7841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.920649 containerd[1811]: 2024-11-12 22:05:06.919 [INFO][7828] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c" Nov 12 22:05:06.920649 containerd[1811]: time="2024-11-12T22:05:06.920641029Z" level=info msg="TearDown network for sandbox \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" successfully" Nov 12 22:05:06.922089 containerd[1811]: time="2024-11-12T22:05:06.922048233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:05:06.922089 containerd[1811]: time="2024-11-12T22:05:06.922074441Z" level=info msg="RemovePodSandbox \"679616d921d6055e46c34accfbc9831a9fab61e56e2180a796f05209a46ecc0c\" returns successfully" Nov 12 22:05:06.922385 containerd[1811]: time="2024-11-12T22:05:06.922354401Z" level=info msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\"" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.939 [WARNING][7869] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.940 [INFO][7869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.940 [INFO][7869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" iface="eth0" netns="" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.940 [INFO][7869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.940 [INFO][7869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.950 [INFO][7882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.950 [INFO][7882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.950 [INFO][7882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.954 [WARNING][7882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.954 [INFO][7882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.955 [INFO][7882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.956368 containerd[1811]: 2024-11-12 22:05:06.955 [INFO][7869] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.956660 containerd[1811]: time="2024-11-12T22:05:06.956372783Z" level=info msg="TearDown network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" successfully" Nov 12 22:05:06.956660 containerd[1811]: time="2024-11-12T22:05:06.956389066Z" level=info msg="StopPodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" returns successfully" Nov 12 22:05:06.956704 containerd[1811]: time="2024-11-12T22:05:06.956677580Z" level=info msg="RemovePodSandbox for \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\"" Nov 12 22:05:06.956704 containerd[1811]: time="2024-11-12T22:05:06.956693841Z" level=info msg="Forcibly stopping sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\"" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.974 [WARNING][7911] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" WorkloadEndpoint="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.974 [INFO][7911] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.974 [INFO][7911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" iface="eth0" netns="" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.974 [INFO][7911] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.974 [INFO][7911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.984 [INFO][7923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.984 [INFO][7923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.984 [INFO][7923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.988 [WARNING][7923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.988 [INFO][7923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" HandleID="k8s-pod-network.a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--kube--controllers--7546446886--8b475-eth0" Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.989 [INFO][7923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:06.990605 containerd[1811]: 2024-11-12 22:05:06.990 [INFO][7911] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475" Nov 12 22:05:06.990855 containerd[1811]: time="2024-11-12T22:05:06.990628665Z" level=info msg="TearDown network for sandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" successfully" Nov 12 22:05:06.992092 containerd[1811]: time="2024-11-12T22:05:06.992035118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:05:06.992092 containerd[1811]: time="2024-11-12T22:05:06.992063874Z" level=info msg="RemovePodSandbox \"a75ba88da3b0ec5869e7bde47637269d4baaa603eb3ef0c3f280eed1a6d04475\" returns successfully" Nov 12 22:05:06.992336 containerd[1811]: time="2024-11-12T22:05:06.992296924Z" level=info msg="StopPodSandbox for \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\"" Nov 12 22:05:06.992373 containerd[1811]: time="2024-11-12T22:05:06.992336488Z" level=info msg="TearDown network for sandbox \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" successfully" Nov 12 22:05:06.992373 containerd[1811]: time="2024-11-12T22:05:06.992343395Z" level=info msg="StopPodSandbox for \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" returns successfully" Nov 12 22:05:06.992514 containerd[1811]: time="2024-11-12T22:05:06.992475806Z" level=info msg="RemovePodSandbox for \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\"" Nov 12 22:05:06.992514 containerd[1811]: time="2024-11-12T22:05:06.992487933Z" level=info msg="Forcibly stopping sandbox \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\"" Nov 12 22:05:06.992559 containerd[1811]: time="2024-11-12T22:05:06.992515986Z" level=info msg="TearDown network for sandbox \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" successfully" Nov 12 22:05:06.993750 containerd[1811]: time="2024-11-12T22:05:06.993702380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 22:05:06.993750 containerd[1811]: time="2024-11-12T22:05:06.993721252Z" level=info msg="RemovePodSandbox \"4e6e3003c810385f8e2c9fa533f8bfeeb20172e1d3cb3d7e246e2b4669037e59\" returns successfully" Nov 12 22:05:06.993889 containerd[1811]: time="2024-11-12T22:05:06.993846003Z" level=info msg="StopPodSandbox for \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\"" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.012 [WARNING][7954] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9130193-3858-4a34-9c6f-60605387578f", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d", Pod:"calico-apiserver-7d4b7b8c7-kll9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c80bfbfc20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.012 [INFO][7954] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.012 [INFO][7954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" iface="eth0" netns="" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.012 [INFO][7954] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.012 [INFO][7954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.022 [INFO][7973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.022 [INFO][7973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.022 [INFO][7973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.026 [WARNING][7973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.026 [INFO][7973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.027 [INFO][7973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:07.028418 containerd[1811]: 2024-11-12 22:05:07.027 [INFO][7954] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.028418 containerd[1811]: time="2024-11-12T22:05:07.028363943Z" level=info msg="TearDown network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\" successfully" Nov 12 22:05:07.028418 containerd[1811]: time="2024-11-12T22:05:07.028381009Z" level=info msg="StopPodSandbox for \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\" returns successfully" Nov 12 22:05:07.028729 containerd[1811]: time="2024-11-12T22:05:07.028647434Z" level=info msg="RemovePodSandbox for \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\"" Nov 12 22:05:07.028729 containerd[1811]: time="2024-11-12T22:05:07.028664013Z" level=info msg="Forcibly stopping sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\"" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.045 [WARNING][8001] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0", GenerateName:"calico-apiserver-7d4b7b8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9130193-3858-4a34-9c6f-60605387578f", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 22, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b7b8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-a9d0314af7", ContainerID:"e9604b733ebbe2ebe88b30eb1977ef780814d711ef2c76cb075da717ee20450d", Pod:"calico-apiserver-7d4b7b8c7-kll9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c80bfbfc20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.045 [INFO][8001] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.045 [INFO][8001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" iface="eth0" netns="" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.045 [INFO][8001] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.045 [INFO][8001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.056 [INFO][8015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.056 [INFO][8015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.056 [INFO][8015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.059 [WARNING][8015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.059 [INFO][8015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" HandleID="k8s-pod-network.5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Workload="ci--4081.2.0--a--a9d0314af7-k8s-calico--apiserver--7d4b7b8c7--kll9h-eth0" Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.060 [INFO][8015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 22:05:07.062149 containerd[1811]: 2024-11-12 22:05:07.061 [INFO][8001] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97" Nov 12 22:05:07.062439 containerd[1811]: time="2024-11-12T22:05:07.062174252Z" level=info msg="TearDown network for sandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\" successfully" Nov 12 22:05:07.063476 containerd[1811]: time="2024-11-12T22:05:07.063435978Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:05:07.063476 containerd[1811]: time="2024-11-12T22:05:07.063464070Z" level=info msg="RemovePodSandbox \"5322934edda645f608fbd7fa16a818bedb9298c194fdad43e8648de6776fdf97\" returns successfully" Nov 12 22:05:07.063760 containerd[1811]: time="2024-11-12T22:05:07.063697414Z" level=info msg="StopPodSandbox for \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\"" Nov 12 22:05:07.063760 containerd[1811]: time="2024-11-12T22:05:07.063738573Z" level=info msg="TearDown network for sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" successfully" Nov 12 22:05:07.063760 containerd[1811]: time="2024-11-12T22:05:07.063745654Z" level=info msg="StopPodSandbox for \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" returns successfully" Nov 12 22:05:07.063957 containerd[1811]: time="2024-11-12T22:05:07.063916350Z" level=info msg="RemovePodSandbox for \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\"" Nov 12 22:05:07.063957 containerd[1811]: time="2024-11-12T22:05:07.063929852Z" level=info msg="Forcibly stopping sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\"" Nov 12 22:05:07.064002 containerd[1811]: time="2024-11-12T22:05:07.063956600Z" level=info msg="TearDown network for sandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" successfully" Nov 12 22:05:07.065168 containerd[1811]: time="2024-11-12T22:05:07.065128792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 22:05:07.065168 containerd[1811]: time="2024-11-12T22:05:07.065149298Z" level=info msg="RemovePodSandbox \"edfe889ba9d743f17fc26fca6b6f7f04094e56a0483e8ac575bd4eb6e99e81fc\" returns successfully" Nov 12 22:05:39.426562 systemd[1]: Started sshd@7-147.75.202.249:22-195.178.110.67:34274.service - OpenSSH per-connection server daemon (195.178.110.67:34274). Nov 12 22:05:39.992037 sshd[9071]: Invalid user b from 195.178.110.67 port 34274 Nov 12 22:05:40.130610 sshd[9071]: Connection closed by invalid user b 195.178.110.67 port 34274 [preauth] Nov 12 22:05:40.133972 systemd[1]: sshd@7-147.75.202.249:22-195.178.110.67:34274.service: Deactivated successfully. Nov 12 22:10:06.774054 systemd[1]: Started sshd@8-147.75.202.249:22-147.75.109.163:40252.service - OpenSSH per-connection server daemon (147.75.109.163:40252). Nov 12 22:10:06.806397 sshd[9695]: Accepted publickey for core from 147.75.109.163 port 40252 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:06.807821 sshd[9695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:06.812856 systemd-logind[1794]: New session 10 of user core. Nov 12 22:10:06.822552 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 22:10:06.957543 sshd[9695]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:06.959447 systemd[1]: sshd@8-147.75.202.249:22-147.75.109.163:40252.service: Deactivated successfully. Nov 12 22:10:06.960311 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:10:06.960728 systemd-logind[1794]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:10:06.961196 systemd-logind[1794]: Removed session 10. Nov 12 22:10:11.983951 systemd[1]: Started sshd@9-147.75.202.249:22-147.75.109.163:42700.service - OpenSSH per-connection server daemon (147.75.109.163:42700). Nov 12 22:10:12.020309 sshd[9722]: Accepted publickey for core from 147.75.109.163 port 42700 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:12.021764 sshd[9722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:12.026335 systemd-logind[1794]: New session 11 of user core. Nov 12 22:10:12.041581 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 22:10:12.142918 sshd[9722]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:12.144888 systemd[1]: sshd@9-147.75.202.249:22-147.75.109.163:42700.service: Deactivated successfully. Nov 12 22:10:12.146049 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 22:10:12.146932 systemd-logind[1794]: Session 11 logged out. Waiting for processes to exit. Nov 12 22:10:12.147564 systemd-logind[1794]: Removed session 11. Nov 12 22:10:17.177672 systemd[1]: Started sshd@10-147.75.202.249:22-147.75.109.163:42712.service - OpenSSH per-connection server daemon (147.75.109.163:42712). Nov 12 22:10:17.201516 sshd[9772]: Accepted publickey for core from 147.75.109.163 port 42712 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:17.202143 sshd[9772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:17.204756 systemd-logind[1794]: New session 12 of user core. Nov 12 22:10:17.215403 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 22:10:17.302208 sshd[9772]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:17.316004 systemd[1]: sshd@10-147.75.202.249:22-147.75.109.163:42712.service: Deactivated successfully. 
Nov 12 22:10:17.316829 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 22:10:17.317606 systemd-logind[1794]: Session 12 logged out. Waiting for processes to exit. Nov 12 22:10:17.318263 systemd[1]: Started sshd@11-147.75.202.249:22-147.75.109.163:42720.service - OpenSSH per-connection server daemon (147.75.109.163:42720). Nov 12 22:10:17.318816 systemd-logind[1794]: Removed session 12. Nov 12 22:10:17.343732 sshd[9799]: Accepted publickey for core from 147.75.109.163 port 42720 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:17.344829 sshd[9799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:17.348517 systemd-logind[1794]: New session 13 of user core. Nov 12 22:10:17.373624 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 22:10:17.484314 sshd[9799]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:17.497332 systemd[1]: sshd@11-147.75.202.249:22-147.75.109.163:42720.service: Deactivated successfully. Nov 12 22:10:17.498213 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 22:10:17.498900 systemd-logind[1794]: Session 13 logged out. Waiting for processes to exit. Nov 12 22:10:17.499656 systemd[1]: Started sshd@12-147.75.202.249:22-147.75.109.163:42724.service - OpenSSH per-connection server daemon (147.75.109.163:42724). Nov 12 22:10:17.500200 systemd-logind[1794]: Removed session 13. Nov 12 22:10:17.523119 sshd[9823]: Accepted publickey for core from 147.75.109.163 port 42724 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:17.523899 sshd[9823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:17.526553 systemd-logind[1794]: New session 14 of user core. Nov 12 22:10:17.535530 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 22:10:17.624881 sshd[9823]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:17.626411 systemd[1]: sshd@12-147.75.202.249:22-147.75.109.163:42724.service: Deactivated successfully. Nov 12 22:10:17.627326 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:10:17.628048 systemd-logind[1794]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:10:17.628683 systemd-logind[1794]: Removed session 14. Nov 12 22:10:22.656582 systemd[1]: Started sshd@13-147.75.202.249:22-147.75.109.163:46344.service - OpenSSH per-connection server daemon (147.75.109.163:46344). Nov 12 22:10:22.678597 sshd[9858]: Accepted publickey for core from 147.75.109.163 port 46344 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:22.679409 sshd[9858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:22.682172 systemd-logind[1794]: New session 15 of user core. Nov 12 22:10:22.695357 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 22:10:22.782263 sshd[9858]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:22.783971 systemd[1]: sshd@13-147.75.202.249:22-147.75.109.163:46344.service: Deactivated successfully. Nov 12 22:10:22.784900 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:10:22.785646 systemd-logind[1794]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:10:22.786204 systemd-logind[1794]: Removed session 15. Nov 12 22:10:27.825544 systemd[1]: Started sshd@14-147.75.202.249:22-147.75.109.163:46356.service - OpenSSH per-connection server daemon (147.75.109.163:46356). 
Nov 12 22:10:27.847886 sshd[9913]: Accepted publickey for core from 147.75.109.163 port 46356 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:27.848973 sshd[9913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:27.852680 systemd-logind[1794]: New session 16 of user core. Nov 12 22:10:27.870539 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 22:10:27.965350 sshd[9913]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:27.966779 systemd[1]: sshd@14-147.75.202.249:22-147.75.109.163:46356.service: Deactivated successfully. Nov 12 22:10:27.967685 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:10:27.968358 systemd-logind[1794]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:10:27.968979 systemd-logind[1794]: Removed session 16. Nov 12 22:10:32.982494 systemd[1]: Started sshd@15-147.75.202.249:22-147.75.109.163:33252.service - OpenSSH per-connection server daemon (147.75.109.163:33252). Nov 12 22:10:33.005998 sshd[9959]: Accepted publickey for core from 147.75.109.163 port 33252 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:33.006783 sshd[9959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:33.009817 systemd-logind[1794]: New session 17 of user core. Nov 12 22:10:33.026727 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 22:10:33.123061 sshd[9959]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:33.124679 systemd[1]: sshd@15-147.75.202.249:22-147.75.109.163:33252.service: Deactivated successfully. Nov 12 22:10:33.125590 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 22:10:33.126228 systemd-logind[1794]: Session 17 logged out. Waiting for processes to exit. Nov 12 22:10:33.126946 systemd-logind[1794]: Removed session 17. Nov 12 22:10:38.152479 systemd[1]: Started sshd@16-147.75.202.249:22-147.75.109.163:33260.service - OpenSSH per-connection server daemon (147.75.109.163:33260). Nov 12 22:10:38.174348 sshd[9985]: Accepted publickey for core from 147.75.109.163 port 33260 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:38.175093 sshd[9985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:38.177996 systemd-logind[1794]: New session 18 of user core. Nov 12 22:10:38.195501 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 22:10:38.283211 sshd[9985]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:38.307140 systemd[1]: sshd@16-147.75.202.249:22-147.75.109.163:33260.service: Deactivated successfully. Nov 12 22:10:38.308036 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 22:10:38.308849 systemd-logind[1794]: Session 18 logged out. Waiting for processes to exit. Nov 12 22:10:38.309638 systemd[1]: Started sshd@17-147.75.202.249:22-147.75.109.163:33262.service - OpenSSH per-connection server daemon (147.75.109.163:33262). Nov 12 22:10:38.310277 systemd-logind[1794]: Removed session 18. Nov 12 22:10:38.337569 sshd[10011]: Accepted publickey for core from 147.75.109.163 port 33262 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:38.338987 sshd[10011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:38.344413 systemd-logind[1794]: New session 19 of user core. Nov 12 22:10:38.367806 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 12 22:10:38.569694 sshd[10011]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:38.592574 systemd[1]: sshd@17-147.75.202.249:22-147.75.109.163:33262.service: Deactivated successfully. Nov 12 22:10:38.596525 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 22:10:38.599940 systemd-logind[1794]: Session 19 logged out. Waiting for processes to exit. Nov 12 22:10:38.619153 systemd[1]: Started sshd@18-147.75.202.249:22-147.75.109.163:33266.service - OpenSSH per-connection server daemon (147.75.109.163:33266). Nov 12 22:10:38.621811 systemd-logind[1794]: Removed session 19. Nov 12 22:10:38.670566 sshd[10032]: Accepted publickey for core from 147.75.109.163 port 33266 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:38.672369 sshd[10032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:38.678361 systemd-logind[1794]: New session 20 of user core. Nov 12 22:10:38.698696 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 22:10:40.053919 sshd[10032]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:40.075540 systemd[1]: sshd@18-147.75.202.249:22-147.75.109.163:33266.service: Deactivated successfully. Nov 12 22:10:40.079278 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 22:10:40.081349 systemd-logind[1794]: Session 20 logged out. Waiting for processes to exit. Nov 12 22:10:40.105747 systemd[1]: Started sshd@19-147.75.202.249:22-147.75.109.163:36630.service - OpenSSH per-connection server daemon (147.75.109.163:36630). Nov 12 22:10:40.107033 systemd-logind[1794]: Removed session 20. Nov 12 22:10:40.140989 sshd[10065]: Accepted publickey for core from 147.75.109.163 port 36630 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:40.142670 sshd[10065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:40.147621 systemd-logind[1794]: New session 21 of user core. Nov 12 22:10:40.167696 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 22:10:40.351303 sshd[10065]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:40.368951 systemd[1]: sshd@19-147.75.202.249:22-147.75.109.163:36630.service: Deactivated successfully. Nov 12 22:10:40.373003 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 22:10:40.376439 systemd-logind[1794]: Session 21 logged out. Waiting for processes to exit. Nov 12 22:10:40.379655 systemd[1]: Started sshd@20-147.75.202.249:22-147.75.109.163:36642.service - OpenSSH per-connection server daemon (147.75.109.163:36642). Nov 12 22:10:40.382336 systemd-logind[1794]: Removed session 21. Nov 12 22:10:40.434624 sshd[10092]: Accepted publickey for core from 147.75.109.163 port 36642 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:40.438166 sshd[10092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:40.450407 systemd-logind[1794]: New session 22 of user core. Nov 12 22:10:40.470592 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 22:10:40.601420 sshd[10092]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:40.603387 systemd[1]: sshd@20-147.75.202.249:22-147.75.109.163:36642.service: Deactivated successfully. Nov 12 22:10:40.604218 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 22:10:40.604583 systemd-logind[1794]: Session 22 logged out. Waiting for processes to exit. 
Nov 12 22:10:40.605055 systemd-logind[1794]: Removed session 22. Nov 12 22:10:45.636591 systemd[1]: Started sshd@21-147.75.202.249:22-147.75.109.163:36648.service - OpenSSH per-connection server daemon (147.75.109.163:36648). Nov 12 22:10:45.658464 sshd[10124]: Accepted publickey for core from 147.75.109.163 port 36648 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:45.659388 sshd[10124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:45.662765 systemd-logind[1794]: New session 23 of user core. Nov 12 22:10:45.684541 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 22:10:45.773515 sshd[10124]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:45.775184 systemd[1]: sshd@21-147.75.202.249:22-147.75.109.163:36648.service: Deactivated successfully. Nov 12 22:10:45.776185 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 22:10:45.776955 systemd-logind[1794]: Session 23 logged out. Waiting for processes to exit. Nov 12 22:10:45.777453 systemd-logind[1794]: Removed session 23. Nov 12 22:10:50.807605 systemd[1]: Started sshd@22-147.75.202.249:22-147.75.109.163:49620.service - OpenSSH per-connection server daemon (147.75.109.163:49620). Nov 12 22:10:50.829290 sshd[10175]: Accepted publickey for core from 147.75.109.163 port 49620 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:50.830051 sshd[10175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:50.832835 systemd-logind[1794]: New session 24 of user core. Nov 12 22:10:50.833831 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 22:10:50.920612 sshd[10175]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:50.922503 systemd[1]: sshd@22-147.75.202.249:22-147.75.109.163:49620.service: Deactivated successfully. Nov 12 22:10:50.923387 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 22:10:50.923813 systemd-logind[1794]: Session 24 logged out. Waiting for processes to exit. Nov 12 22:10:50.924323 systemd-logind[1794]: Removed session 24. Nov 12 22:10:55.951914 systemd[1]: Started sshd@23-147.75.202.249:22-147.75.109.163:49632.service - OpenSSH per-connection server daemon (147.75.109.163:49632). Nov 12 22:10:56.011961 sshd[10229]: Accepted publickey for core from 147.75.109.163 port 49632 ssh2: RSA SHA256:KBVmhKW+oGJyeiskyb5aOJEhXukp3J/3zKpNJrwWNKM Nov 12 22:10:56.013778 sshd[10229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:10:56.019692 systemd-logind[1794]: New session 25 of user core. Nov 12 22:10:56.037656 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 22:10:56.132234 sshd[10229]: pam_unix(sshd:session): session closed for user core Nov 12 22:10:56.133729 systemd[1]: sshd@23-147.75.202.249:22-147.75.109.163:49632.service: Deactivated successfully. Nov 12 22:10:56.134659 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 22:10:56.135417 systemd-logind[1794]: Session 25 logged out. Waiting for processes to exit. Nov 12 22:10:56.135995 systemd-logind[1794]: Removed session 25.