Aug 13 07:17:57.759233 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:17:57.759249 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:57.759256 kernel: Disabled fast string operations
Aug 13 07:17:57.759260 kernel: BIOS-provided physical RAM map:
Aug 13 07:17:57.759264 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Aug 13 07:17:57.759268 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Aug 13 07:17:57.759274 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Aug 13 07:17:57.759278 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Aug 13 07:17:57.759283 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Aug 13 07:17:57.759287 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Aug 13 07:17:57.759291 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Aug 13 07:17:57.759295 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Aug 13 07:17:57.759299 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Aug 13 07:17:57.759304 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 13 07:17:57.759310 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Aug 13 07:17:57.759315 kernel: NX (Execute Disable) protection: active
Aug 13 07:17:57.759320 kernel: APIC: Static calls initialized
Aug 13 07:17:57.759324 kernel: SMBIOS 2.7 present.
Aug 13 07:17:57.759329 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Aug 13 07:17:57.759334 kernel: vmware: hypercall mode: 0x00
Aug 13 07:17:57.759339 kernel: Hypervisor detected: VMware
Aug 13 07:17:57.759343 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Aug 13 07:17:57.759349 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Aug 13 07:17:57.759354 kernel: vmware: using clock offset of 3419116089 ns
Aug 13 07:17:57.759359 kernel: tsc: Detected 3408.000 MHz processor
Aug 13 07:17:57.759364 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:17:57.759369 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:17:57.759374 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Aug 13 07:17:57.759379 kernel: total RAM covered: 3072M
Aug 13 07:17:57.759384 kernel: Found optimal setting for mtrr clean up
Aug 13 07:17:57.759389 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Aug 13 07:17:57.759395 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Aug 13 07:17:57.759401 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:17:57.759405 kernel: Using GB pages for direct mapping
Aug 13 07:17:57.759410 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:17:57.759415 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Aug 13 07:17:57.759420 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Aug 13 07:17:57.759425 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Aug 13 07:17:57.759430 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Aug 13 07:17:57.759435 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 07:17:57.759443 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 07:17:57.759448 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Aug 13 07:17:57.759453 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Aug 13 07:17:57.759458 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Aug 13 07:17:57.759463 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Aug 13 07:17:57.759469 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Aug 13 07:17:57.759475 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Aug 13 07:17:57.759485 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Aug 13 07:17:57.759491 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Aug 13 07:17:57.759497 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 07:17:57.759502 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 07:17:57.759507 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Aug 13 07:17:57.759512 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Aug 13 07:17:57.759517 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Aug 13 07:17:57.759522 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Aug 13 07:17:57.759529 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Aug 13 07:17:57.759534 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Aug 13 07:17:57.759539 kernel: system APIC only can use physical flat
Aug 13 07:17:57.759545 kernel: APIC: Switched APIC routing to: physical flat
Aug 13 07:17:57.759550 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:17:57.759555 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Aug 13 07:17:57.759560 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Aug 13 07:17:57.759565 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Aug 13 07:17:57.759570 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Aug 13 07:17:57.759576 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Aug 13 07:17:57.759581 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Aug 13 07:17:57.759586 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Aug 13 07:17:57.759591 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Aug 13 07:17:57.759596 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Aug 13 07:17:57.759601 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Aug 13 07:17:57.759606 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Aug 13 07:17:57.759612 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Aug 13 07:17:57.759616 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Aug 13 07:17:57.759621 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Aug 13 07:17:57.759628 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Aug 13 07:17:57.759633 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Aug 13 07:17:57.759638 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Aug 13 07:17:57.759643 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Aug 13 07:17:57.759648 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Aug 13 07:17:57.759653 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Aug 13 07:17:57.759657 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Aug 13 07:17:57.759663 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Aug 13 07:17:57.759668 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Aug 13 07:17:57.759673 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Aug 13 07:17:57.759678 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Aug 13 07:17:57.759684 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Aug 13 07:17:57.759689 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Aug 13 07:17:57.759694 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Aug 13 07:17:57.759699 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Aug 13 07:17:57.759704 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Aug 13 07:17:57.759709 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Aug 13 07:17:57.759714 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Aug 13 07:17:57.759719 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Aug 13 07:17:57.759724 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Aug 13 07:17:57.759729 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Aug 13 07:17:57.759735 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Aug 13 07:17:57.759741 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Aug 13 07:17:57.759746 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Aug 13 07:17:57.759751 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Aug 13 07:17:57.759756 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Aug 13 07:17:57.759761 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Aug 13 07:17:57.759766 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Aug 13 07:17:57.759771 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Aug 13 07:17:57.759776 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Aug 13 07:17:57.759781 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Aug 13 07:17:57.759787 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Aug 13 07:17:57.759792 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Aug 13 07:17:57.759797 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Aug 13 07:17:57.759802 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Aug 13 07:17:57.759807 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Aug 13 07:17:57.759812 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Aug 13 07:17:57.759817 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Aug 13 07:17:57.759822 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Aug 13 07:17:57.759827 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Aug 13 07:17:57.759832 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Aug 13 07:17:57.759838 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Aug 13 07:17:57.759843 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Aug 13 07:17:57.759848 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Aug 13 07:17:57.759858 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Aug 13 07:17:57.759863 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Aug 13 07:17:57.759869 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Aug 13 07:17:57.759874 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Aug 13 07:17:57.759879 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Aug 13 07:17:57.759886 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Aug 13 07:17:57.759892 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Aug 13 07:17:57.759897 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Aug 13 07:17:57.759903 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Aug 13 07:17:57.759908 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Aug 13 07:17:57.759913 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Aug 13 07:17:57.759919 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Aug 13 07:17:57.759924 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Aug 13 07:17:57.759929 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Aug 13 07:17:57.760476 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Aug 13 07:17:57.760486 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Aug 13 07:17:57.760491 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Aug 13 07:17:57.760497 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Aug 13 07:17:57.760503 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Aug 13 07:17:57.760508 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Aug 13 07:17:57.760513 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Aug 13 07:17:57.760519 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Aug 13 07:17:57.760524 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Aug 13 07:17:57.760530 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Aug 13 07:17:57.760535 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Aug 13 07:17:57.760542 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Aug 13 07:17:57.760547 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Aug 13 07:17:57.760553 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Aug 13 07:17:57.760558 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Aug 13 07:17:57.760564 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Aug 13 07:17:57.760569 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Aug 13 07:17:57.760574 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Aug 13 07:17:57.760580 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Aug 13 07:17:57.760585 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Aug 13 07:17:57.760591 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Aug 13 07:17:57.760597 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Aug 13 07:17:57.760602 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Aug 13 07:17:57.760608 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Aug 13 07:17:57.760613 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Aug 13 07:17:57.760619 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Aug 13 07:17:57.760624 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Aug 13 07:17:57.760629 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Aug 13 07:17:57.760635 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Aug 13 07:17:57.760640 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Aug 13 07:17:57.760645 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Aug 13 07:17:57.760652 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Aug 13 07:17:57.760657 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Aug 13 07:17:57.760663 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Aug 13 07:17:57.760668 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Aug 13 07:17:57.760673 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Aug 13 07:17:57.760679 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Aug 13 07:17:57.760684 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Aug 13 07:17:57.760689 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Aug 13 07:17:57.760695 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Aug 13 07:17:57.760700 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Aug 13 07:17:57.760707 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Aug 13 07:17:57.760712 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Aug 13 07:17:57.760718 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Aug 13 07:17:57.760723 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Aug 13 07:17:57.760729 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Aug 13 07:17:57.760734 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Aug 13 07:17:57.760739 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Aug 13 07:17:57.760745 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Aug 13 07:17:57.760750 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Aug 13 07:17:57.760755 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Aug 13 07:17:57.760762 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Aug 13 07:17:57.760767 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Aug 13 07:17:57.760772 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Aug 13 07:17:57.760778 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Aug 13 07:17:57.760783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:17:57.760789 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 07:17:57.760794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Aug 13 07:17:57.760800 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Aug 13 07:17:57.760806 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Aug 13 07:17:57.760811 kernel: Zone ranges:
Aug 13 07:17:57.760818 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:17:57.760824 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Aug 13 07:17:57.760829 kernel: Normal empty
Aug 13 07:17:57.760835 kernel: Movable zone start for each node
Aug 13 07:17:57.760840 kernel: Early memory node ranges
Aug 13 07:17:57.760846 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Aug 13 07:17:57.760851 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Aug 13 07:17:57.760857 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Aug 13 07:17:57.760862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Aug 13 07:17:57.760869 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:17:57.760875 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Aug 13 07:17:57.760880 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Aug 13 07:17:57.760886 kernel: ACPI: PM-Timer IO Port: 0x1008
Aug 13 07:17:57.760891 kernel: system APIC only can use physical flat
Aug 13 07:17:57.760897 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Aug 13 07:17:57.760902 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Aug 13 07:17:57.760908 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Aug 13 07:17:57.760913 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Aug 13 07:17:57.760918 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Aug 13 07:17:57.760925 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Aug 13 07:17:57.760931 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Aug 13 07:17:57.761998 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Aug 13 07:17:57.762005 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Aug 13 07:17:57.762010 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Aug 13 07:17:57.762016 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Aug 13 07:17:57.762021 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Aug 13 07:17:57.762027 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Aug 13 07:17:57.762032 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Aug 13 07:17:57.762041 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Aug 13 07:17:57.762046 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Aug 13 07:17:57.762052 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Aug 13 07:17:57.762057 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Aug 13 07:17:57.762063 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Aug 13 07:17:57.762068 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Aug 13 07:17:57.762073 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Aug 13 07:17:57.762079 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Aug 13 07:17:57.762085 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Aug 13 07:17:57.762090 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Aug 13 07:17:57.762097 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Aug 13 07:17:57.762102 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Aug 13 07:17:57.762108 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Aug 13 07:17:57.762113 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Aug 13 07:17:57.762119 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Aug 13 07:17:57.762124 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Aug 13 07:17:57.762130 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Aug 13 07:17:57.762135 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Aug 13 07:17:57.762140 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Aug 13 07:17:57.762147 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Aug 13 07:17:57.762152 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Aug 13 07:17:57.762158 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Aug 13 07:17:57.762163 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Aug 13 07:17:57.762169 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Aug 13 07:17:57.762174 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Aug 13 07:17:57.762180 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Aug 13 07:17:57.762185 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Aug 13 07:17:57.762191 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Aug 13 07:17:57.762196 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Aug 13 07:17:57.762203 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Aug 13 07:17:57.762208 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Aug 13 07:17:57.762214 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Aug 13 07:17:57.762219 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Aug 13 07:17:57.762224 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Aug 13 07:17:57.762230 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Aug 13 07:17:57.762235 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Aug 13 07:17:57.762241 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Aug 13 07:17:57.762246 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Aug 13 07:17:57.762252 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Aug 13 07:17:57.762258 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Aug 13 07:17:57.762264 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Aug 13 07:17:57.762269 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Aug 13 07:17:57.762275 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Aug 13 07:17:57.762280 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Aug 13 07:17:57.762286 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Aug 13 07:17:57.762291 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Aug 13 07:17:57.762296 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Aug 13 07:17:57.762302 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Aug 13 07:17:57.762307 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Aug 13 07:17:57.762314 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Aug 13 07:17:57.762319 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Aug 13 07:17:57.762325 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Aug 13 07:17:57.762330 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Aug 13 07:17:57.762336 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Aug 13 07:17:57.762341 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Aug 13 07:17:57.762347 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Aug 13 07:17:57.762352 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Aug 13 07:17:57.762357 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Aug 13 07:17:57.762367 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Aug 13 07:17:57.762376 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Aug 13 07:17:57.762386 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Aug 13 07:17:57.762394 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Aug 13 07:17:57.762404 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Aug 13 07:17:57.762414 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Aug 13 07:17:57.762419 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Aug 13 07:17:57.762425 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Aug 13 07:17:57.762431 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Aug 13 07:17:57.762436 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Aug 13 07:17:57.762444 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Aug 13 07:17:57.762449 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Aug 13 07:17:57.762454 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Aug 13 07:17:57.762460 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Aug 13 07:17:57.762465 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Aug 13 07:17:57.762471 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Aug 13 07:17:57.762476 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Aug 13 07:17:57.762482 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Aug 13 07:17:57.762487 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Aug 13 07:17:57.762492 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Aug 13 07:17:57.762499 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Aug 13 07:17:57.762505 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Aug 13 07:17:57.762510 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Aug 13 07:17:57.762516 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Aug 13 07:17:57.762521 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Aug 13 07:17:57.762527 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Aug 13 07:17:57.762532 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Aug 13 07:17:57.762537 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Aug 13 07:17:57.762543 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Aug 13 07:17:57.762550 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Aug 13 07:17:57.762555 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Aug 13 07:17:57.762561 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Aug 13 07:17:57.762566 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Aug 13 07:17:57.762572 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Aug 13 07:17:57.762577 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Aug 13 07:17:57.762582 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Aug 13 07:17:57.762588 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Aug 13 07:17:57.762593 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Aug 13 07:17:57.762599 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Aug 13 07:17:57.762605 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Aug 13 07:17:57.762611 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Aug 13 07:17:57.762616 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Aug 13 07:17:57.762622 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Aug 13 07:17:57.762627 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Aug 13 07:17:57.762632 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Aug 13 07:17:57.762638 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Aug 13 07:17:57.762643 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Aug 13 07:17:57.762649 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Aug 13 07:17:57.762655 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Aug 13 07:17:57.762661 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Aug 13 07:17:57.762666 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Aug 13 07:17:57.762672 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Aug 13 07:17:57.762677 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Aug 13 07:17:57.762682 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Aug 13 07:17:57.762688 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Aug 13 07:17:57.762693 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Aug 13 07:17:57.762699 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:17:57.762704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Aug 13 07:17:57.762711 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:17:57.762717 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Aug 13 07:17:57.762722 kernel: TSC deadline timer available
Aug 13 07:17:57.762728 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Aug 13 07:17:57.762734 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Aug 13 07:17:57.762740 kernel: Booting paravirtualized kernel on VMware hypervisor
Aug 13 07:17:57.762745 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:17:57.762751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Aug 13 07:17:57.762757 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Aug 13 07:17:57.762763 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Aug 13 07:17:57.762769 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Aug 13 07:17:57.762775 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Aug 13 07:17:57.762780 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Aug 13 07:17:57.762785 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Aug 13 07:17:57.762791 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Aug 13 07:17:57.762805 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Aug 13 07:17:57.762812 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Aug 13 07:17:57.762817 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Aug 13 07:17:57.762824 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Aug 13 07:17:57.762830 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Aug 13 07:17:57.762835 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Aug 13 07:17:57.762841 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Aug 13 07:17:57.762847 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Aug 13 07:17:57.762852 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Aug 13 07:17:57.762858 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Aug 13 07:17:57.762864 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Aug 13 07:17:57.762872 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:57.762878 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:17:57.762884 kernel: random: crng init done
Aug 13 07:17:57.762889 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Aug 13 07:17:57.762895 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Aug 13 07:17:57.762901 kernel: printk: log_buf_len min size: 262144 bytes
Aug 13 07:17:57.762907 kernel: printk: log_buf_len: 1048576 bytes
Aug 13 07:17:57.762913 kernel: printk: early log buf free: 239648(91%)
Aug 13 07:17:57.762919 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:17:57.762926 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:17:57.764016 kernel: Fallback order for Node 0: 0
Aug 13 07:17:57.764027 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Aug 13 07:17:57.764033 kernel: Policy zone: DMA32
Aug 13 07:17:57.764039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:17:57.764046 kernel: Memory: 1936348K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 160020K reserved, 0K cma-reserved)
Aug 13 07:17:57.764054 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Aug 13 07:17:57.764060 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:17:57.764066 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:17:57.764072 kernel: Dynamic Preempt: voluntary
Aug 13 07:17:57.764078 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:17:57.764084 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:17:57.764090 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Aug 13 07:17:57.764097 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:17:57.764104 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:17:57.764110 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:17:57.764116 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:17:57.764122 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Aug 13 07:17:57.764128 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Aug 13 07:17:57.764134 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Aug 13 07:17:57.764139 kernel: Console: colour VGA+ 80x25
Aug 13 07:17:57.764145 kernel: printk: console [tty0] enabled
Aug 13 07:17:57.764151 kernel: printk: console [ttyS0] enabled
Aug 13 07:17:57.764158 kernel: ACPI: Core revision 20230628
Aug 13 07:17:57.764165 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Aug 13 07:17:57.764171 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:17:57.764177 kernel: x2apic enabled
Aug 13 07:17:57.764183 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:17:57.764188 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:17:57.764194 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Aug 13 07:17:57.764200 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Aug 13 07:17:57.764207 kernel: Disabled fast string operations
Aug 13 07:17:57.764213 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 07:17:57.764220 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 07:17:57.764226 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:17:57.764232 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 07:17:57.764238 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 07:17:57.764244 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Aug 13 07:17:57.764250 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Aug 13 07:17:57.764256 kernel: RETBleed: Mitigation: Enhanced IBRS
Aug 13 07:17:57.764262 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:17:57.764268 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:17:57.764275 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:17:57.764281 kernel: SRBDS: Unknown: Dependent on hypervisor status
Aug 13 07:17:57.764287 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 13 07:17:57.764293 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:17:57.764305 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:17:57.764314 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:17:57.764320 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:17:57.764326 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:17:57.764333 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:17:57.764339 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:17:57.764345 kernel: pid_max: default: 131072 minimum: 1024 Aug 13 07:17:57.764351 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:17:57.764357 kernel: landlock: Up and running. Aug 13 07:17:57.764362 kernel: SELinux: Initializing. Aug 13 07:17:57.764368 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:17:57.764374 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:17:57.764380 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Aug 13 07:17:57.764387 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Aug 13 07:17:57.764393 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Aug 13 07:17:57.764399 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Aug 13 07:17:57.764405 kernel: Performance Events: Skylake events, core PMU driver. Aug 13 07:17:57.764411 kernel: core: CPUID marked event: 'cpu cycles' unavailable Aug 13 07:17:57.764418 kernel: core: CPUID marked event: 'instructions' unavailable Aug 13 07:17:57.764423 kernel: core: CPUID marked event: 'bus cycles' unavailable Aug 13 07:17:57.764429 kernel: core: CPUID marked event: 'cache references' unavailable Aug 13 07:17:57.764435 kernel: core: CPUID marked event: 'cache misses' unavailable Aug 13 07:17:57.764442 kernel: core: CPUID marked event: 'branch instructions' unavailable Aug 13 07:17:57.764448 kernel: core: CPUID marked event: 'branch misses' unavailable Aug 13 07:17:57.764453 kernel: ... version: 1 Aug 13 07:17:57.764463 kernel: ... bit width: 48 Aug 13 07:17:57.764473 kernel: ... generic registers: 4 Aug 13 07:17:57.764485 kernel: ... value mask: 0000ffffffffffff Aug 13 07:17:57.764492 kernel: ... 
max period: 000000007fffffff Aug 13 07:17:57.764498 kernel: ... fixed-purpose events: 0 Aug 13 07:17:57.764504 kernel: ... event mask: 000000000000000f Aug 13 07:17:57.764512 kernel: signal: max sigframe size: 1776 Aug 13 07:17:57.764518 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:17:57.764524 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:17:57.764529 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:17:57.764535 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:17:57.764541 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:17:57.764547 kernel: .... node #0, CPUs: #1 Aug 13 07:17:57.764553 kernel: Disabled fast string operations Aug 13 07:17:57.764559 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Aug 13 07:17:57.764565 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 07:17:57.764571 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:17:57.764577 kernel: smpboot: Max logical packages: 128 Aug 13 07:17:57.764583 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Aug 13 07:17:57.764589 kernel: devtmpfs: initialized Aug 13 07:17:57.764595 kernel: x86/mm: Memory block size: 128MB Aug 13 07:17:57.764601 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Aug 13 07:17:57.764607 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:17:57.764613 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Aug 13 07:17:57.764619 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:17:57.764626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:17:57.764632 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:17:57.764638 kernel: audit: type=2000 audit(1755069476.093:1): state=initialized audit_enabled=0 res=1 Aug 13 07:17:57.764644 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:17:57.764650 
kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:17:57.764656 kernel: cpuidle: using governor menu Aug 13 07:17:57.764662 kernel: Simple Boot Flag at 0x36 set to 0x80 Aug 13 07:17:57.764668 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:17:57.764673 kernel: dca service started, version 1.12.1 Aug 13 07:17:57.764680 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Aug 13 07:17:57.764686 kernel: PCI: Using configuration type 1 for base access Aug 13 07:17:57.764693 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 07:17:57.764699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:17:57.764704 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:17:57.764710 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:17:57.764716 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:17:57.764722 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:17:57.764728 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:17:57.764735 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:17:57.764741 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:17:57.764747 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Aug 13 07:17:57.764753 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:17:57.764759 kernel: ACPI: Interpreter enabled Aug 13 07:17:57.764764 kernel: ACPI: PM: (supports S0 S1 S5) Aug 13 07:17:57.764770 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:17:57.764776 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:17:57.764784 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:17:57.764790 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Aug 13 07:17:57.764796 kernel: ACPI: PCI Root 
Bridge [PCI0] (domain 0000 [bus 00-7f]) Aug 13 07:17:57.764880 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:17:57.765972 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Aug 13 07:17:57.766033 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Aug 13 07:17:57.766042 kernel: PCI host bridge to bus 0000:00 Aug 13 07:17:57.766096 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 07:17:57.766147 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Aug 13 07:17:57.766192 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 07:17:57.766237 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:17:57.766282 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Aug 13 07:17:57.766327 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Aug 13 07:17:57.766389 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Aug 13 07:17:57.766449 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Aug 13 07:17:57.766506 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Aug 13 07:17:57.766561 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Aug 13 07:17:57.766612 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Aug 13 07:17:57.766663 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 07:17:57.766713 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 07:17:57.766767 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 07:17:57.766818 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 07:17:57.766885 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Aug 13 07:17:57.767958 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Aug 13 07:17:57.768015 kernel: pci 0000:00:07.3: quirk: 
[io 0x1040-0x104f] claimed by PIIX4 SMB Aug 13 07:17:57.768072 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Aug 13 07:17:57.768123 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Aug 13 07:17:57.768178 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Aug 13 07:17:57.768233 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Aug 13 07:17:57.768285 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Aug 13 07:17:57.768336 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Aug 13 07:17:57.768387 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Aug 13 07:17:57.768437 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Aug 13 07:17:57.768487 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:17:57.768548 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Aug 13 07:17:57.768603 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.768655 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.768710 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.768762 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.768817 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.768871 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.768926 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771019 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771079 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771132 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771189 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771245 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771304 kernel: pci 0000:00:15.6: [15ad:07a0] 
type 01 class 0x060400 Aug 13 07:17:57.771357 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771413 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771466 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771538 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771595 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771653 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771705 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771761 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771814 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771871 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.771926 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.771998 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772051 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772107 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772160 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772216 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772272 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772329 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772381 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772442 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772494 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772551 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772606 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772662 kernel: 
pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772714 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772770 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.772824 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.772880 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.773898 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.773999 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.774055 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.774111 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.774164 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.774259 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.774549 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.774628 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775011 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.775080 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775134 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.775190 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775244 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.775303 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775356 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.775412 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775465 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.775522 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775575 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold 
Aug 13 07:17:57.775634 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775686 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.775743 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Aug 13 07:17:57.775795 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.775849 kernel: pci_bus 0000:01: extended config space not accessible Aug 13 07:17:57.775902 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 07:17:57.775970 kernel: pci_bus 0000:02: extended config space not accessible Aug 13 07:17:57.775980 kernel: acpiphp: Slot [32] registered Aug 13 07:17:57.775986 kernel: acpiphp: Slot [33] registered Aug 13 07:17:57.775992 kernel: acpiphp: Slot [34] registered Aug 13 07:17:57.775998 kernel: acpiphp: Slot [35] registered Aug 13 07:17:57.776004 kernel: acpiphp: Slot [36] registered Aug 13 07:17:57.776011 kernel: acpiphp: Slot [37] registered Aug 13 07:17:57.776016 kernel: acpiphp: Slot [38] registered Aug 13 07:17:57.776022 kernel: acpiphp: Slot [39] registered Aug 13 07:17:57.776030 kernel: acpiphp: Slot [40] registered Aug 13 07:17:57.776037 kernel: acpiphp: Slot [41] registered Aug 13 07:17:57.776042 kernel: acpiphp: Slot [42] registered Aug 13 07:17:57.776048 kernel: acpiphp: Slot [43] registered Aug 13 07:17:57.776054 kernel: acpiphp: Slot [44] registered Aug 13 07:17:57.776060 kernel: acpiphp: Slot [45] registered Aug 13 07:17:57.776066 kernel: acpiphp: Slot [46] registered Aug 13 07:17:57.776072 kernel: acpiphp: Slot [47] registered Aug 13 07:17:57.776078 kernel: acpiphp: Slot [48] registered Aug 13 07:17:57.776085 kernel: acpiphp: Slot [49] registered Aug 13 07:17:57.776091 kernel: acpiphp: Slot [50] registered Aug 13 07:17:57.776105 kernel: acpiphp: Slot [51] registered Aug 13 07:17:57.776124 kernel: acpiphp: Slot [52] registered Aug 13 07:17:57.776140 kernel: acpiphp: Slot [53] registered Aug 13 07:17:57.776153 kernel: acpiphp: Slot [54] registered Aug 13 
07:17:57.776160 kernel: acpiphp: Slot [55] registered Aug 13 07:17:57.776166 kernel: acpiphp: Slot [56] registered Aug 13 07:17:57.776172 kernel: acpiphp: Slot [57] registered Aug 13 07:17:57.776178 kernel: acpiphp: Slot [58] registered Aug 13 07:17:57.776185 kernel: acpiphp: Slot [59] registered Aug 13 07:17:57.776191 kernel: acpiphp: Slot [60] registered Aug 13 07:17:57.776205 kernel: acpiphp: Slot [61] registered Aug 13 07:17:57.776211 kernel: acpiphp: Slot [62] registered Aug 13 07:17:57.776217 kernel: acpiphp: Slot [63] registered Aug 13 07:17:57.778098 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Aug 13 07:17:57.778168 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 07:17:57.778222 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 07:17:57.778274 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 07:17:57.778329 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Aug 13 07:17:57.778381 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Aug 13 07:17:57.778432 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Aug 13 07:17:57.778482 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Aug 13 07:17:57.778534 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Aug 13 07:17:57.778593 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Aug 13 07:17:57.778646 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Aug 13 07:17:57.778702 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Aug 13 07:17:57.778755 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 07:17:57.778808 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Aug 13 07:17:57.778861 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Aug 13 07:17:57.778914 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 07:17:57.778987 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 07:17:57.779041 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 07:17:57.779098 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 07:17:57.779150 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Aug 13 07:17:57.779202 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 07:17:57.779253 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 07:17:57.779307 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 07:17:57.779358 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 07:17:57.779410 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 07:17:57.779461 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 07:17:57.779532 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 07:17:57.779583 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 07:17:57.779633 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 07:17:57.779685 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 07:17:57.779735 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 07:17:57.779786 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 07:17:57.779840 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 07:17:57.779892 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 07:17:57.779978 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 07:17:57.780033 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 07:17:57.780083 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 07:17:57.780134 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 07:17:57.780189 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 07:17:57.780239 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 07:17:57.780289 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 07:17:57.780348 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Aug 13 07:17:57.780401 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Aug 13 07:17:57.780460 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Aug 13 07:17:57.780518 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Aug 13 07:17:57.780573 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Aug 13 07:17:57.780625 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 07:17:57.780677 kernel: pci 0000:0b:00.0: supports D1 D2 Aug 13 07:17:57.780749 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 07:17:57.780802 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Aug 13 07:17:57.780855 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 07:17:57.780906 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 07:17:57.781962 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Aug 13 07:17:57.782038 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 07:17:57.782098 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 07:17:57.782152 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 07:17:57.782205 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 07:17:57.782260 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 07:17:57.782312 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 07:17:57.782365 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Aug 13 07:17:57.782418 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 07:17:57.782474 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 07:17:57.782527 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 07:17:57.782579 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 07:17:57.782632 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 07:17:57.782684 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 07:17:57.782736 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 07:17:57.782789 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 07:17:57.782841 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 07:17:57.782896 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 07:17:57.782964 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 07:17:57.783016 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 07:17:57.783068 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 07:17:57.783121 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 07:17:57.783173 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 07:17:57.783225 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 07:17:57.783279 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 07:17:57.783335 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 07:17:57.783387 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 07:17:57.783439 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 07:17:57.783493 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 07:17:57.783546 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 07:17:57.783597 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 07:17:57.783649 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 07:17:57.783706 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 07:17:57.783758 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 07:17:57.783811 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 07:17:57.783863 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 07:17:57.783917 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 07:17:57.784002 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 07:17:57.784055 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 07:17:57.784107 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 07:17:57.784162 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 07:17:57.784214 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 07:17:57.784266 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Aug 13 07:17:57.784317 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 07:17:57.784369 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 07:17:57.784421 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 07:17:57.784473 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 07:17:57.784535 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 07:17:57.784591 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 07:17:57.784643 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 07:17:57.784694 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 07:17:57.784746 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 07:17:57.784799 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 07:17:57.784851 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 07:17:57.784903 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 07:17:57.784966 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 07:17:57.785023 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 07:17:57.785075 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 07:17:57.785127 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 07:17:57.785181 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 07:17:57.785233 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 07:17:57.785285 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 07:17:57.785339 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 07:17:57.785393 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 07:17:57.785446 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 07:17:57.785500 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Aug 13 
07:17:57.785552 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 07:17:57.785604 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 07:17:57.785657 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 07:17:57.785709 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 07:17:57.785761 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 07:17:57.785815 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 07:17:57.785870 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 07:17:57.785923 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 07:17:57.786019 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 07:17:57.786071 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 07:17:57.786123 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 07:17:57.786131 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Aug 13 07:17:57.786138 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Aug 13 07:17:57.786144 kernel: ACPI: PCI: Interrupt link LNKB disabled Aug 13 07:17:57.786153 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:17:57.786159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Aug 13 07:17:57.786166 kernel: iommu: Default domain type: Translated Aug 13 07:17:57.786172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:17:57.786178 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:17:57.786184 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:17:57.786190 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Aug 13 07:17:57.786196 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Aug 13 07:17:57.786247 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Aug 13 07:17:57.786302 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Aug 13 07:17:57.786354 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:17:57.786363 kernel: vgaarb: loaded Aug 13 07:17:57.786369 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Aug 13 07:17:57.786375 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Aug 13 07:17:57.786381 kernel: clocksource: Switched to clocksource tsc-early Aug 13 07:17:57.786387 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:17:57.786393 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:17:57.786399 kernel: pnp: PnP ACPI init Aug 13 07:17:57.786456 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Aug 13 07:17:57.786506 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Aug 13 07:17:57.786553 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Aug 13 07:17:57.786604 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Aug 13 07:17:57.786656 kernel: pnp 00:06: [dma 2] Aug 13 07:17:57.786708 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Aug 13 07:17:57.786760 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Aug 13 07:17:57.786807 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Aug 13 07:17:57.786816 kernel: pnp: PnP ACPI: found 8 devices Aug 13 07:17:57.786822 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:17:57.786828 kernel: NET: Registered PF_INET protocol family Aug 13 07:17:57.786834 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:17:57.786840 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 07:17:57.786847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:17:57.786854 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:17:57.786861 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 07:17:57.786867 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 07:17:57.786873 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:17:57.786880 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:17:57.786886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:17:57.786892 kernel: NET: Registered PF_XDP protocol family Aug 13 07:17:57.787015 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Aug 13 07:17:57.787074 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 07:17:57.787127 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 07:17:57.787180 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 07:17:57.787232 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 07:17:57.787285 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Aug 13 07:17:57.787337 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Aug 13 07:17:57.787391 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Aug 13 07:17:57.787443 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Aug 13 07:17:57.787499 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Aug 13 07:17:57.787551 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Aug 13 07:17:57.787603 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Aug 13 07:17:57.787654 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Aug 13 
07:17:57.787710 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Aug 13 07:17:57.787762 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Aug 13 07:17:57.787814 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Aug 13 07:17:57.787866 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Aug 13 07:17:57.787918 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Aug 13 07:17:57.787978 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Aug 13 07:17:57.788033 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Aug 13 07:17:57.788085 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Aug 13 07:17:57.788137 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Aug 13 07:17:57.788190 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Aug 13 07:17:57.788242 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 07:17:57.788294 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 07:17:57.788349 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.788401 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.788453 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.788505 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.788557 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.788608 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.788660 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.788711 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 
13 07:17:57.788766 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.788818 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.788870 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.788922 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.788989 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789041 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789093 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789145 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789199 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789252 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789304 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789356 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789408 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789470 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789522 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789574 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789628 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789679 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789731 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789787 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.789840 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.789891 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.791970 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792057 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.792120 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792178 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.792257 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792347 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.792410 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792476 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.792530 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792583 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.792648 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792702 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.792763 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792828 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.792891 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.792969 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793028 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.793095 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793150 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.793205 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793262 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.793323 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793385 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Aug 13 07:17:57.793456 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793522 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.793595 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793651 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.793704 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793756 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.793820 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.793885 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.794518 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.794594 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.794678 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.794735 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.794788 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.794842 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.794898 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.794987 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795045 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.795102 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795163 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.795223 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795284 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.795347 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795419 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.795479 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795534 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.795596 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795668 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.795725 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795778 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.795837 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 07:17:57.795894 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 07:17:57.796222 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 07:17:57.796304 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Aug 13 07:17:57.796371 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 07:17:57.796433 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 07:17:57.796492 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 07:17:57.796560 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Aug 13 07:17:57.796628 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 07:17:57.796695 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 07:17:57.796766 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 07:17:57.796820 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 07:17:57.796874 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 07:17:57.796931 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Aug 13 07:17:57.799033 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 07:17:57.799092 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 
07:17:57.799155 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 07:17:57.799238 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 07:17:57.799320 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 07:17:57.799377 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 07:17:57.799444 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 07:17:57.799505 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 07:17:57.799568 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 07:17:57.799636 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 07:17:57.799717 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 07:17:57.799789 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 07:17:57.799856 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 07:17:57.799921 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 07:17:57.800019 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 07:17:57.800090 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 07:17:57.800143 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 07:17:57.800212 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 07:17:57.800267 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 07:17:57.800341 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 07:17:57.800419 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 07:17:57.800517 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Aug 13 07:17:57.800583 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 07:17:57.800636 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 07:17:57.800699 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Aug 13 07:17:57.800780 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 07:17:57.800844 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 07:17:57.800903 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 07:17:57.800972 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 07:17:57.801037 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 07:17:57.801106 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 07:17:57.801177 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 07:17:57.801246 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Aug 13 07:17:57.801309 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 07:17:57.801387 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 07:17:57.801442 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 07:17:57.801503 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 07:17:57.801559 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 07:17:57.801616 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 07:17:57.801674 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 07:17:57.801735 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 07:17:57.801788 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 07:17:57.801851 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 07:17:57.801925 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 07:17:57.802677 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 07:17:57.802764 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 07:17:57.802838 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 07:17:57.802906 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 07:17:57.802976 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 07:17:57.803055 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 07:17:57.803130 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 07:17:57.803186 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 07:17:57.803257 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 07:17:57.803322 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 07:17:57.803376 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 07:17:57.803446 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 07:17:57.803506 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 07:17:57.803573 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 07:17:57.803633 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 07:17:57.803706 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 07:17:57.803782 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 07:17:57.803856 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 07:17:57.803925 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 07:17:57.803996 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 07:17:57.804064 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 07:17:57.804127 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 07:17:57.804181 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 07:17:57.804235 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Aug 13 07:17:57.804290 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 07:17:57.804355 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 
07:17:57.804414 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 07:17:57.804471 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 07:17:57.804530 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 07:17:57.804585 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 07:17:57.804648 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 07:17:57.804704 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 07:17:57.804776 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 07:17:57.804843 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 07:17:57.804904 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 07:17:57.805421 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 07:17:57.805484 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 07:17:57.805558 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 07:17:57.805621 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 07:17:57.805675 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 07:17:57.805748 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 07:17:57.805803 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 07:17:57.805869 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 07:17:57.805928 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 07:17:57.805999 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 07:17:57.806065 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 07:17:57.806125 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Aug 13 07:17:57.806187 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 07:17:57.806241 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Aug 13 07:17:57.806311 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 07:17:57.806367 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 07:17:57.806431 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 07:17:57.806485 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 07:17:57.806550 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 07:17:57.806604 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 07:17:57.806664 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 07:17:57.806727 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 07:17:57.806783 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 07:17:57.806847 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 07:17:57.806896 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 07:17:57.807007 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 07:17:57.807064 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Aug 13 07:17:57.807115 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Aug 13 07:17:57.807184 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Aug 13 07:17:57.807240 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Aug 13 07:17:57.807291 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 07:17:57.807350 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 07:17:57.807405 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 07:17:57.807462 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 07:17:57.807516 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Aug 13 07:17:57.807569 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Aug 13 07:17:57.807650 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Aug 13 07:17:57.807703 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Aug 13 07:17:57.807763 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 07:17:57.807828 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Aug 13 07:17:57.807879 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Aug 13 07:17:57.807928 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 07:17:57.808007 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Aug 13 07:17:57.808077 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Aug 13 07:17:57.808135 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 07:17:57.808202 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Aug 13 07:17:57.808266 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 07:17:57.808328 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Aug 13 07:17:57.808384 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 07:17:57.808451 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Aug 13 07:17:57.808502 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 07:17:57.808560 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Aug 13 07:17:57.808622 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 07:17:57.808680 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Aug 13 07:17:57.808746 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 07:17:57.808826 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Aug 13 07:17:57.808877 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Aug 13 07:17:57.808981 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 07:17:57.809037 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Aug 13 07:17:57.809086 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Aug 13 07:17:57.809411 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 07:17:57.809470 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Aug 13 07:17:57.809530 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Aug 13 07:17:57.809583 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 07:17:57.809636 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Aug 13 07:17:57.809686 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 07:17:57.809755 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Aug 13 07:17:57.809806 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 07:17:57.809879 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Aug 13 07:17:57.809939 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 07:17:57.810008 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Aug 13 07:17:57.810071 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 07:17:57.810130 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Aug 13 07:17:57.810193 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 07:17:57.810256 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Aug 13 07:17:57.810313 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Aug 13 07:17:57.810366 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 07:17:57.810427 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Aug 13 07:17:57.810477 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Aug 13 07:17:57.810538 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 07:17:57.810592 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Aug 13 07:17:57.810645 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Aug 13 07:17:57.810693 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 07:17:57.810757 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Aug 13 07:17:57.810823 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 07:17:57.810892 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Aug 13 07:17:57.810962 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 07:17:57.811050 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Aug 13 07:17:57.811126 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 07:17:57.811191 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Aug 13 07:17:57.811262 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 07:17:57.811338 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Aug 13 07:17:57.811398 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 07:17:57.811465 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Aug 13 07:17:57.811542 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Aug 13 07:17:57.811601 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 07:17:57.811676 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Aug 13 07:17:57.811741 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Aug 13 07:17:57.811816 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 07:17:57.811877 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Aug 13 07:17:57.812296 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 07:17:57.812367 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Aug 13 07:17:57.812435 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Aug 13 07:17:57.812494 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Aug 13 07:17:57.812552 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 07:17:57.812606 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Aug 13 07:17:57.812660 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 07:17:57.812713 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Aug 13 07:17:57.812784 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 07:17:57.812845 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Aug 13 07:17:57.812914 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 07:17:57.813047 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 07:17:57.813065 kernel: PCI: CLS 32 bytes, default 64 Aug 13 07:17:57.813076 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:17:57.813085 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Aug 13 07:17:57.813094 kernel: clocksource: Switched to clocksource tsc Aug 13 07:17:57.813101 kernel: Initialise system trusted keyrings Aug 13 07:17:57.813112 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:17:57.813121 kernel: Key type asymmetric registered Aug 13 07:17:57.813128 kernel: Asymmetric key parser 'x509' registered Aug 13 07:17:57.813135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:17:57.813145 kernel: io scheduler mq-deadline registered Aug 13 07:17:57.813152 kernel: io scheduler kyber registered Aug 13 07:17:57.813158 kernel: io scheduler bfq registered Aug 13 07:17:57.813219 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Aug 13 07:17:57.813289 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.813345 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Aug 13 07:17:57.813399 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.813458 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Aug 13 07:17:57.813528 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815279 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Aug 13 07:17:57.815363 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815449 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Aug 13 07:17:57.815519 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815577 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Aug 13 07:17:57.815640 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815711 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Aug 13 07:17:57.815774 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815829 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Aug 13 07:17:57.815883 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816243 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Aug 13 07:17:57.816315 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816383 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Aug 13 07:17:57.816447 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816503 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Aug 13 07:17:57.816562 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816618 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Aug 13 07:17:57.816682 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816736 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Aug 13 07:17:57.816805 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816874 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Aug 13 07:17:57.816930 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817003 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Aug 13 07:17:57.817077 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817136 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Aug 13 07:17:57.817196 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817267 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Aug 13 07:17:57.817347 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817672 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Aug 13 07:17:57.817733 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817796 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Aug 13 07:17:57.817875 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817956 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Aug 13 07:17:57.818019 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818074 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Aug 13 07:17:57.818137 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818195 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Aug 13 07:17:57.818249 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818331 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Aug 13 07:17:57.818395 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818464 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Aug 13 07:17:57.818523 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818588 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Aug 13 07:17:57.818650 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818711 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Aug 13 07:17:57.818769 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818828 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Aug 13 07:17:57.818882 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818953 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Aug 13 07:17:57.819025 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819086 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Aug 13 07:17:57.819142 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819203 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Aug 13 07:17:57.819270 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819330 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Aug 13 07:17:57.819397 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819454 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Aug 13 07:17:57.819528 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819541 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Aug 13 07:17:57.819548 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:17:57.819555 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:17:57.819561 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Aug 13 07:17:57.819571 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:17:57.819578 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:17:57.819646 kernel: rtc_cmos 00:01: registered as rtc0 Aug 13 07:17:57.819712 kernel: rtc_cmos 00:01: setting system clock to 2025-08-13T07:17:57 UTC (1755069477) Aug 13 07:17:57.819770 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Aug 13 07:17:57.819780 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:17:57.819788 kernel: intel_pstate: CPU model not supported Aug 13 07:17:57.819794 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:17:57.819800 kernel: Segment Routing with IPv6 Aug 13 07:17:57.819807 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:17:57.819815 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:17:57.819822 kernel: Key type dns_resolver registered Aug 13 07:17:57.819831 kernel: IPI shorthand broadcast: enabled Aug 13 07:17:57.819838 kernel: sched_clock: Marking stable (949003622, 231506761)->(1235223535, -54713152) Aug 13 07:17:57.819845 kernel: registered taskstats version 1 Aug 13 07:17:57.819851 kernel: Loading compiled-in X.509 certificates Aug 13 07:17:57.819857 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:17:57.819864 kernel: Key type .fscrypt registered Aug 13 07:17:57.819870 kernel: Key type fscrypt-provisioning registered Aug 13 07:17:57.819878 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 07:17:57.819884 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:17:57.819892 kernel: ima: No architecture policies found Aug 13 07:17:57.819898 kernel: clk: Disabling unused clocks Aug 13 07:17:57.819909 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:17:57.819915 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:17:57.819922 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:17:57.819928 kernel: Run /init as init process Aug 13 07:17:57.819944 kernel: with arguments: Aug 13 07:17:57.819955 kernel: /init Aug 13 07:17:57.819961 kernel: with environment: Aug 13 07:17:57.819967 kernel: HOME=/ Aug 13 07:17:57.819974 kernel: TERM=linux Aug 13 07:17:57.819980 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:17:57.819987 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:17:57.819995 systemd[1]: Detected virtualization vmware. Aug 13 07:17:57.820002 systemd[1]: Detected architecture x86-64. Aug 13 07:17:57.820010 systemd[1]: Running in initrd. Aug 13 07:17:57.820016 systemd[1]: No hostname configured, using default hostname. Aug 13 07:17:57.820023 systemd[1]: Hostname set to . Aug 13 07:17:57.820030 systemd[1]: Initializing machine ID from random generator. Aug 13 07:17:57.820036 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:17:57.820042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:17:57.820049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Aug 13 07:17:57.820056 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:17:57.820064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:17:57.820070 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:17:57.820077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:17:57.820084 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:17:57.820091 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:17:57.820098 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:17:57.820104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:17:57.820112 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:17:57.820121 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:17:57.820128 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:17:57.820135 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:17:57.820141 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:17:57.820148 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:17:57.820154 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:17:57.820161 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:17:57.820168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:17:57.820176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:17:57.820183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 07:17:57.820192 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:17:57.820201 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:17:57.820211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:17:57.820219 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:17:57.820225 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:17:57.820232 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:17:57.820240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:17:57.820246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:57.820253 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:17:57.820260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:17:57.820266 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:17:57.820287 systemd-journald[215]: Collecting audit messages is disabled. Aug 13 07:17:57.820304 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:17:57.820311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:17:57.820319 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:17:57.820326 kernel: Bridge firewalling registered Aug 13 07:17:57.820333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:17:57.820339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:57.820346 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:57.820353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 13 07:17:57.820361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:17:57.820368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:17:57.820374 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:17:57.820383 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:17:57.820389 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:17:57.820397 systemd-journald[215]: Journal started Aug 13 07:17:57.820411 systemd-journald[215]: Runtime Journal (/run/log/journal/bf4b32bcda05408ba43e9509c5b75d1b) is 4.8M, max 38.6M, 33.8M free. Aug 13 07:17:57.754945 systemd-modules-load[216]: Inserted module 'overlay' Aug 13 07:17:57.822016 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:17:57.778135 systemd-modules-load[216]: Inserted module 'br_netfilter' Aug 13 07:17:57.822485 dracut-cmdline[236]: dracut-dracut-053 Aug 13 07:17:57.824806 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:17:57.829050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:17:57.834939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:17:57.843997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:17:57.858876 systemd-resolved[274]: Positive Trust Anchors: Aug 13 07:17:57.858889 systemd-resolved[274]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:17:57.858911 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:17:57.860878 systemd-resolved[274]: Defaulting to hostname 'linux'. Aug 13 07:17:57.861757 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:17:57.861916 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:17:57.879945 kernel: SCSI subsystem initialized Aug 13 07:17:57.886949 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:17:57.893946 kernel: iscsi: registered transport (tcp) Aug 13 07:17:57.908058 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:17:57.908077 kernel: QLogic iSCSI HBA Driver Aug 13 07:17:57.928339 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:17:57.932071 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:17:57.949960 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 07:17:57.950006 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:17:57.950025 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:17:57.983949 kernel: raid6: avx2x4 gen() 43095 MB/s Aug 13 07:17:58.000944 kernel: raid6: avx2x2 gen() 52923 MB/s Aug 13 07:17:58.018231 kernel: raid6: avx2x1 gen() 44085 MB/s Aug 13 07:17:58.018251 kernel: raid6: using algorithm avx2x2 gen() 52923 MB/s Aug 13 07:17:58.036157 kernel: raid6: .... xor() 31102 MB/s, rmw enabled Aug 13 07:17:58.036178 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:17:58.049943 kernel: xor: automatically using best checksumming function avx Aug 13 07:17:58.150000 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:17:58.155187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:17:58.160100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:17:58.167251 systemd-udevd[433]: Using default interface naming scheme 'v255'. Aug 13 07:17:58.169689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:17:58.177031 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:17:58.185482 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Aug 13 07:17:58.202291 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:17:58.206025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:17:58.282088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:17:58.288025 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:17:58.295199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:17:58.295884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Aug 13 07:17:58.296243 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:17:58.296703 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:17:58.301046 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:17:58.309342 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:17:58.354947 kernel: VMware PVSCSI driver - version 1.0.7.0-k Aug 13 07:17:58.354984 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Aug 13 07:17:58.355944 kernel: vmw_pvscsi: using 64bit dma Aug 13 07:17:58.357942 kernel: vmw_pvscsi: max_id: 16 Aug 13 07:17:58.357964 kernel: vmw_pvscsi: setting ring_pages to 8 Aug 13 07:17:58.366986 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Aug 13 07:17:58.372035 kernel: vmw_pvscsi: enabling reqCallThreshold Aug 13 07:17:58.372071 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Aug 13 07:17:58.372199 kernel: vmw_pvscsi: driver-based request coalescing enabled Aug 13 07:17:58.372213 kernel: vmw_pvscsi: using MSI-X Aug 13 07:17:58.373942 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:17:58.377961 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Aug 13 07:17:58.378113 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Aug 13 07:17:58.378197 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Aug 13 07:17:58.378288 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Aug 13 07:17:58.380045 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:17:58.380133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:17:58.380367 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:58.380486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 13 07:17:58.380576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:58.380712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:58.390360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:58.392952 kernel: libata version 3.00 loaded. Aug 13 07:17:58.396965 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:17:58.398972 kernel: AES CTR mode by8 optimization enabled Aug 13 07:17:58.398988 kernel: ata_piix 0000:00:07.1: version 2.13 Aug 13 07:17:58.402949 kernel: scsi host1: ata_piix Aug 13 07:17:58.403039 kernel: scsi host2: ata_piix Aug 13 07:17:58.403108 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Aug 13 07:17:58.403118 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Aug 13 07:17:58.408295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:58.413037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:58.425155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:17:58.573667 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Aug 13 07:17:58.577977 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Aug 13 07:17:58.593459 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Aug 13 07:17:58.593566 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 07:17:58.593633 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Aug 13 07:17:58.593695 kernel: sd 0:0:0:0: [sda] Cache data unavailable Aug 13 07:17:58.595020 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Aug 13 07:17:58.596993 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Aug 13 07:17:58.597096 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:17:58.604333 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:58.604349 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 07:17:58.607953 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 07:17:58.635967 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (485) Aug 13 07:17:58.641439 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Aug 13 07:17:58.641951 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (483) Aug 13 07:17:58.644436 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Aug 13 07:17:58.647806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Aug 13 07:17:58.650224 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Aug 13 07:17:58.650377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Aug 13 07:17:58.655013 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Aug 13 07:17:58.749245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:58.754953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:59.755967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:59.756879 disk-uuid[591]: The operation has completed successfully. Aug 13 07:17:59.876769 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:17:59.876835 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:17:59.884084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:17:59.886246 sh[608]: Success Aug 13 07:17:59.895959 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:18:00.033479 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:18:00.034602 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:18:00.035000 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:18:00.079875 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:18:00.079922 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:18:00.079942 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:18:00.082456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:18:00.082474 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:18:00.127967 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 07:18:00.129563 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:18:00.134063 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Aug 13 07:18:00.136053 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Aug 13 07:18:00.156912 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:18:00.156975 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:18:00.156987 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:18:00.160956 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 07:18:00.177085 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:18:00.176975 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:18:00.180054 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:18:00.183054 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:18:00.260777 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Aug 13 07:18:00.266200 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 07:18:00.328447 ignition[666]: Ignition 2.19.0 Aug 13 07:18:00.328455 ignition[666]: Stage: fetch-offline Aug 13 07:18:00.328480 ignition[666]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:18:00.328487 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 07:18:00.328556 ignition[666]: parsed url from cmdline: "" Aug 13 07:18:00.328558 ignition[666]: no config URL provided Aug 13 07:18:00.328561 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:18:00.328566 ignition[666]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:18:00.329012 ignition[666]: config successfully fetched Aug 13 07:18:00.329029 ignition[666]: parsing config with SHA512: d62524f43e831de10ea973686b00c37fc2da0c01d803dd3e1df369c331b22e80b0e0009fd355fde093dcda205ef3435392a08689f60b52121a7ab3572fc2546c Aug 13 07:18:00.332325 unknown[666]: fetched base config from "system" Aug 13 07:18:00.332334 unknown[666]: fetched user config from "vmware" Aug 13 07:18:00.332830 ignition[666]: fetch-offline: fetch-offline passed Aug 13 07:18:00.333078 ignition[666]: Ignition finished successfully Aug 13 07:18:00.334201 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:18:00.345398 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:18:00.349022 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:18:00.360698 systemd-networkd[799]: lo: Link UP Aug 13 07:18:00.360703 systemd-networkd[799]: lo: Gained carrier Aug 13 07:18:00.361396 systemd-networkd[799]: Enumeration completed Aug 13 07:18:00.361697 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Aug 13 07:18:00.361720 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:18:00.361858 systemd[1]: Reached target network.target - Network. 
Aug 13 07:18:00.361958 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 07:18:00.364939 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Aug 13 07:18:00.365064 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Aug 13 07:18:00.365217 systemd-networkd[799]: ens192: Link UP Aug 13 07:18:00.365221 systemd-networkd[799]: ens192: Gained carrier Aug 13 07:18:00.376086 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:18:00.385044 ignition[801]: Ignition 2.19.0 Aug 13 07:18:00.385056 ignition[801]: Stage: kargs Aug 13 07:18:00.385172 ignition[801]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:18:00.385179 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 07:18:00.385775 ignition[801]: kargs: kargs passed Aug 13 07:18:00.385807 ignition[801]: Ignition finished successfully Aug 13 07:18:00.387117 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:18:00.391155 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:18:00.397767 ignition[809]: Ignition 2.19.0 Aug 13 07:18:00.397774 ignition[809]: Stage: disks Aug 13 07:18:00.397871 ignition[809]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:18:00.397878 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 07:18:00.398432 ignition[809]: disks: disks passed Aug 13 07:18:00.398460 ignition[809]: Ignition finished successfully Aug 13 07:18:00.399338 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:18:00.399642 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:18:00.399848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:18:00.400109 systemd[1]: Reached target local-fs.target - Local File Systems. 
Aug 13 07:18:00.400330 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:18:00.400420 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:18:00.407077 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:18:00.417658 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 13 07:18:00.419120 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:18:00.424663 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:18:00.515948 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:18:00.516299 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:18:00.516727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:18:00.532014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:18:00.534404 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:18:00.535060 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:18:00.535107 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:18:00.535132 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:18:00.541957 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (825) Aug 13 07:18:00.545307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Aug 13 07:18:00.546903 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:18:00.546920 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:18:00.546929 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:18:00.550951 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 07:18:00.552060 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:18:00.553153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:18:00.579834 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:18:00.582681 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:18:00.585208 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:18:00.597906 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:18:00.713695 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:18:00.718022 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:18:00.720445 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:18:00.723944 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:18:00.740720 ignition[938]: INFO : Ignition 2.19.0 Aug 13 07:18:00.740720 ignition[938]: INFO : Stage: mount Aug 13 07:18:00.741031 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:18:00.741031 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 07:18:00.741809 ignition[938]: INFO : mount: mount passed Aug 13 07:18:00.741904 ignition[938]: INFO : Ignition finished successfully Aug 13 07:18:00.742445 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:18:00.746042 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Aug 13 07:18:00.820222 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:18:01.063217 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:18:01.068166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:18:01.077772 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Aug 13 07:18:01.077812 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:01.077832 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:01.079401 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:18:01.082947 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:18:01.083470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:18:01.094929 ignition[966]: INFO : Ignition 2.19.0
Aug 13 07:18:01.094929 ignition[966]: INFO : Stage: files
Aug 13 07:18:01.095435 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:01.095435 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:01.095927 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:18:01.096386 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:18:01.096386 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:18:01.098748 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:18:01.098906 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:18:01.099073 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:18:01.099025 unknown[966]: wrote ssh authorized keys file for user: core
Aug 13 07:18:01.100322 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:18:01.100657 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:18:01.258100 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:18:01.461285 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:18:01.461285 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:18:01.489033 systemd-networkd[799]: ens192: Gained IPv6LL
Aug 13 07:18:01.740421 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:18:01.943673 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.943960 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Aug 13 07:18:01.943960 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Aug 13 07:18:01.943960 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 07:18:01.949824 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:18:01.997381 ignition[966]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: files passed
Aug 13 07:18:02.000409 ignition[966]: INFO : Ignition finished successfully
Aug 13 07:18:02.001680 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:18:02.007061 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:18:02.008381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:18:02.020510 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:02.020510 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:02.021539 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:02.021907 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:18:02.022078 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:18:02.022332 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:18:02.022880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:18:02.026029 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:18:02.037610 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:18:02.037668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:18:02.038028 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:18:02.038158 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:18:02.038347 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:18:02.038743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:18:02.047766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:18:02.054149 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:18:02.060508 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:18:02.060850 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:18:02.061281 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:18:02.061571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:18:02.061646 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:18:02.062214 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:18:02.062494 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:18:02.062801 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:18:02.063099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:18:02.063420 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:18:02.063965 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:18:02.064131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:18:02.064601 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:18:02.064902 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:18:02.065263 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:18:02.065402 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:18:02.065467 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:18:02.065833 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:18:02.066017 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:18:02.066216 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:18:02.066260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:18:02.066466 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:18:02.066523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:18:02.066771 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:18:02.066832 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:18:02.067069 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:18:02.067214 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:18:02.071046 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:18:02.071248 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:18:02.071471 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:18:02.071687 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:18:02.071735 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:18:02.071953 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:18:02.072000 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:18:02.072227 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:18:02.072284 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:18:02.072523 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:18:02.072576 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:18:02.079171 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:18:02.081045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:18:02.081159 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:18:02.081244 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:18:02.081526 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:18:02.081603 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:18:02.083821 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:18:02.083878 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:18:02.088845 ignition[1021]: INFO : Ignition 2.19.0
Aug 13 07:18:02.088845 ignition[1021]: INFO : Stage: umount
Aug 13 07:18:02.089600 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:02.089600 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:02.090208 ignition[1021]: INFO : umount: umount passed
Aug 13 07:18:02.090707 ignition[1021]: INFO : Ignition finished successfully
Aug 13 07:18:02.091073 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:18:02.091146 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:18:02.091758 systemd[1]: Stopped target network.target - Network.
Aug 13 07:18:02.091870 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:18:02.091897 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:18:02.092087 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:18:02.092108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:18:02.092245 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:18:02.092267 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:18:02.092408 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:18:02.092428 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:18:02.092637 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:18:02.092807 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:18:02.095447 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:18:02.099783 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:18:02.099871 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:18:02.100308 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:18:02.100337 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:18:02.108143 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:18:02.108259 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:18:02.108293 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:18:02.108441 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Aug 13 07:18:02.108466 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Aug 13 07:18:02.108651 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:18:02.108895 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:18:02.108962 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:18:02.112064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:18:02.112121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:18:02.112646 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:18:02.112827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:18:02.113095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:18:02.113123 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:18:02.117293 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:18:02.117359 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:18:02.122410 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:18:02.122504 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:18:02.122885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:18:02.122913 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:18:02.123167 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:18:02.123187 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:18:02.123350 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:18:02.123374 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:18:02.123699 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:18:02.123726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:18:02.124042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:18:02.124066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:02.129050 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:18:02.129342 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:18:02.129377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:18:02.129518 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:18:02.129546 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:18:02.129686 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:18:02.129719 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:18:02.129862 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:18:02.129885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:02.132628 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:18:02.132702 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:18:02.196953 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:18:02.197050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:18:02.197585 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:18:02.197758 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:18:02.197797 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:18:02.202049 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:18:02.214108 systemd[1]: Switching root.
Aug 13 07:18:02.253113 systemd-journald[215]: Journal stopped
Aug 13 07:17:57.759233 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:17:57.759249 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:57.759256 kernel: Disabled fast string operations
Aug 13 07:17:57.759260 kernel: BIOS-provided physical RAM map:
Aug 13 07:17:57.759264 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Aug 13 07:17:57.759268 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Aug 13 07:17:57.759274 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Aug 13 07:17:57.759278 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Aug 13 07:17:57.759283 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Aug 13 07:17:57.759287 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Aug 13 07:17:57.759291 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Aug 13 07:17:57.759295 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Aug 13 07:17:57.759299 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Aug 13 07:17:57.759304 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 13 07:17:57.759310 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Aug 13 07:17:57.759315 kernel: NX (Execute Disable) protection: active
Aug 13 07:17:57.759320 kernel: APIC: Static calls initialized
Aug 13 07:17:57.759324 kernel: SMBIOS 2.7 present.
Aug 13 07:17:57.759329 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Aug 13 07:17:57.759334 kernel: vmware: hypercall mode: 0x00
Aug 13 07:17:57.759339 kernel: Hypervisor detected: VMware
Aug 13 07:17:57.759343 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Aug 13 07:17:57.759349 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Aug 13 07:17:57.759354 kernel: vmware: using clock offset of 3419116089 ns
Aug 13 07:17:57.759359 kernel: tsc: Detected 3408.000 MHz processor
Aug 13 07:17:57.759364 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:17:57.759369 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:17:57.759374 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Aug 13 07:17:57.759379 kernel: total RAM covered: 3072M
Aug 13 07:17:57.759384 kernel: Found optimal setting for mtrr clean up
Aug 13 07:17:57.759389 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Aug 13 07:17:57.759395 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Aug 13 07:17:57.759401 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:17:57.759405 kernel: Using GB pages for direct mapping
Aug 13 07:17:57.759410 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:17:57.759415 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Aug 13 07:17:57.759420 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Aug 13 07:17:57.759425 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Aug 13 07:17:57.759430 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Aug 13 07:17:57.759435 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 07:17:57.759443 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 07:17:57.759448 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Aug 13 07:17:57.759453 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Aug 13 07:17:57.759458 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Aug 13 07:17:57.759463 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Aug 13 07:17:57.759469 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Aug 13 07:17:57.759475 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Aug 13 07:17:57.759485 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Aug 13 07:17:57.759491 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Aug 13 07:17:57.759497 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 07:17:57.759502 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 07:17:57.759507 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Aug 13 07:17:57.759512 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Aug 13 07:17:57.759517 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Aug 13 07:17:57.759522 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Aug 13 07:17:57.759529 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Aug 13 07:17:57.759534 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Aug 13 07:17:57.759539 kernel: system APIC only can use physical flat
Aug 13 07:17:57.759545 kernel: APIC: Switched APIC routing to: physical flat
Aug 13 07:17:57.759550 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:17:57.759555 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Aug 13 07:17:57.759560 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Aug 13 07:17:57.759565 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Aug 13 07:17:57.759570 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Aug 13 07:17:57.759576 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Aug 13 07:17:57.759581 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Aug 13 07:17:57.759586 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Aug 13 07:17:57.759591 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Aug 13 07:17:57.759596 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Aug 13 07:17:57.759601 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Aug 13 07:17:57.759606 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Aug 13 07:17:57.759612 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Aug 13 07:17:57.759616 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Aug 13 07:17:57.759621 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Aug 13 07:17:57.759628 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Aug 13 07:17:57.759633 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Aug 13 07:17:57.759638 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Aug 13 07:17:57.759643 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Aug 13 07:17:57.759648 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Aug 13 07:17:57.759653 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Aug 13 07:17:57.759657 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Aug 13 07:17:57.759663 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Aug 13 07:17:57.759668 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Aug 13 07:17:57.759673 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Aug 13 07:17:57.759678 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Aug 13 07:17:57.759684 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Aug 13 07:17:57.759689 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Aug 13 07:17:57.759694 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Aug 13 07:17:57.759699 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Aug 13 07:17:57.759704 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Aug 13 07:17:57.759709 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Aug 13 07:17:57.759714 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Aug 13 07:17:57.759719 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Aug 13 07:17:57.759724 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Aug 13 07:17:57.759729 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Aug 13 07:17:57.759735 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Aug 13 07:17:57.759741 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Aug 13 07:17:57.759746 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Aug 13 07:17:57.759751 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Aug 13 07:17:57.759756 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Aug 13 07:17:57.759761 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Aug 13 07:17:57.759766 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Aug 13 07:17:57.759771 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Aug 13 07:17:57.759776 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Aug 13 07:17:57.759781 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Aug 13 07:17:57.759787 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Aug 13 07:17:57.759792 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Aug 13 07:17:57.759797 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Aug 13 07:17:57.759802 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Aug 13 07:17:57.759807 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Aug 13 07:17:57.759812 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Aug 13 07:17:57.759817 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Aug 13 07:17:57.759822 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Aug 13 07:17:57.759827 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Aug 13 07:17:57.759832 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Aug 13 07:17:57.759838 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Aug 13 07:17:57.759843 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Aug 13 07:17:57.759848 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Aug 13 07:17:57.759858 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Aug 13 07:17:57.759863 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Aug 13 07:17:57.759869 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Aug 13 07:17:57.759874 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Aug 13 07:17:57.759879 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Aug 13 07:17:57.759886 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Aug 13 07:17:57.759892 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Aug 13 07:17:57.759897 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Aug 13 07:17:57.759903 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Aug 13 07:17:57.759908 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Aug 13 07:17:57.759913 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Aug 13 07:17:57.759919 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Aug 13 07:17:57.759924 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Aug 13 07:17:57.759929 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Aug 13 07:17:57.760476 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Aug 13 07:17:57.760486 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Aug 13 07:17:57.760491 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Aug 13 07:17:57.760497 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Aug 13 07:17:57.760503 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Aug 13 07:17:57.760508 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Aug 13 07:17:57.760513 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Aug 13 07:17:57.760519 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Aug 13 07:17:57.760524 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Aug 13 07:17:57.760530 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Aug 13 07:17:57.760535 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Aug 13 07:17:57.760542 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Aug 13 07:17:57.760547 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Aug 13 07:17:57.760553 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Aug 13 07:17:57.760558 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Aug 13 07:17:57.760564 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Aug 13 07:17:57.760569 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Aug 13 07:17:57.760574 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Aug 13 07:17:57.760580 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Aug 13 07:17:57.760585 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Aug 13 07:17:57.760591 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Aug 13 07:17:57.760597 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Aug 13 07:17:57.760602 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Aug 13 07:17:57.760608 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Aug 13 07:17:57.760613 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Aug 13 07:17:57.760619 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Aug 13 07:17:57.760624 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Aug 13 07:17:57.760629 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Aug 13 07:17:57.760635 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Aug 13 07:17:57.760640 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Aug 13 07:17:57.760645 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Aug 13 07:17:57.760652 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Aug 13 07:17:57.760657 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Aug 13 07:17:57.760663 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Aug 13 07:17:57.760668 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Aug 13 07:17:57.760673 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Aug 13 07:17:57.760679 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Aug 13 07:17:57.760684 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Aug 13 07:17:57.760689 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Aug 13 07:17:57.760695 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Aug 13 07:17:57.760700 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Aug 13 07:17:57.760707 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Aug 13 07:17:57.760712 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Aug 13 07:17:57.760718 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Aug 13 07:17:57.760723 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Aug 13 07:17:57.760729 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Aug 13 07:17:57.760734 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Aug 13 07:17:57.760739 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Aug 13 07:17:57.760745 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Aug 13 07:17:57.760750 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Aug 13 07:17:57.760755 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Aug 13 07:17:57.760762 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Aug 13 07:17:57.760767 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Aug 13 07:17:57.760772 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Aug 13 07:17:57.760778 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Aug 13 07:17:57.760783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:17:57.760789 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 07:17:57.760794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Aug 13 07:17:57.760800 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Aug 13 07:17:57.760806 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Aug 13 07:17:57.760811 kernel: Zone ranges:
Aug 13 07:17:57.760818 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:17:57.760824 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Aug 13 07:17:57.760829 kernel: Normal empty
Aug 13 07:17:57.760835 kernel: Movable zone start for each node
Aug 13 07:17:57.760840 kernel: Early memory node ranges
Aug 13 07:17:57.760846 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Aug 13 07:17:57.760851 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Aug 13 07:17:57.760857 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Aug 13 07:17:57.760862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Aug 13 07:17:57.760869 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:17:57.760875 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Aug 13 07:17:57.760880 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Aug 13 07:17:57.760886 kernel: ACPI: PM-Timer IO Port: 0x1008
Aug 13 07:17:57.760891 kernel: system APIC only can use physical flat
Aug 13 07:17:57.760897 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Aug 13 07:17:57.760902 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Aug 13 07:17:57.760908 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Aug 13 07:17:57.760913 kernel: ACPI: LAPIC_NMI
(acpi_id[0x03] high edge lint[0x1]) Aug 13 07:17:57.760918 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Aug 13 07:17:57.760925 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Aug 13 07:17:57.760931 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Aug 13 07:17:57.761998 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Aug 13 07:17:57.762005 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Aug 13 07:17:57.762010 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Aug 13 07:17:57.762016 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Aug 13 07:17:57.762021 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Aug 13 07:17:57.762027 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Aug 13 07:17:57.762032 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Aug 13 07:17:57.762041 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Aug 13 07:17:57.762046 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Aug 13 07:17:57.762052 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Aug 13 07:17:57.762057 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Aug 13 07:17:57.762063 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Aug 13 07:17:57.762068 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Aug 13 07:17:57.762073 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Aug 13 07:17:57.762079 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Aug 13 07:17:57.762085 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Aug 13 07:17:57.762090 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Aug 13 07:17:57.762097 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Aug 13 07:17:57.762102 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Aug 13 07:17:57.762108 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Aug 13 07:17:57.762113 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x1b] high edge lint[0x1]) Aug 13 07:17:57.762119 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Aug 13 07:17:57.762124 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Aug 13 07:17:57.762130 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Aug 13 07:17:57.762135 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Aug 13 07:17:57.762140 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Aug 13 07:17:57.762147 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Aug 13 07:17:57.762152 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Aug 13 07:17:57.762158 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Aug 13 07:17:57.762163 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Aug 13 07:17:57.762169 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Aug 13 07:17:57.762174 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Aug 13 07:17:57.762180 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Aug 13 07:17:57.762185 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Aug 13 07:17:57.762191 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Aug 13 07:17:57.762196 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Aug 13 07:17:57.762203 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Aug 13 07:17:57.762208 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Aug 13 07:17:57.762214 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Aug 13 07:17:57.762219 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Aug 13 07:17:57.762224 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Aug 13 07:17:57.762230 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Aug 13 07:17:57.762235 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Aug 13 07:17:57.762241 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Aug 13 07:17:57.762246 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x33] high edge lint[0x1]) Aug 13 07:17:57.762252 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Aug 13 07:17:57.762258 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Aug 13 07:17:57.762264 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Aug 13 07:17:57.762269 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Aug 13 07:17:57.762275 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Aug 13 07:17:57.762280 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Aug 13 07:17:57.762286 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Aug 13 07:17:57.762291 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Aug 13 07:17:57.762296 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Aug 13 07:17:57.762302 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Aug 13 07:17:57.762307 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Aug 13 07:17:57.762314 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Aug 13 07:17:57.762319 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Aug 13 07:17:57.762325 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Aug 13 07:17:57.762330 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Aug 13 07:17:57.762336 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Aug 13 07:17:57.762341 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Aug 13 07:17:57.762347 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Aug 13 07:17:57.762352 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Aug 13 07:17:57.762357 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Aug 13 07:17:57.762367 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Aug 13 07:17:57.762376 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Aug 13 07:17:57.762386 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Aug 13 07:17:57.762394 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x4b] high edge lint[0x1]) Aug 13 07:17:57.762404 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Aug 13 07:17:57.762414 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Aug 13 07:17:57.762419 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Aug 13 07:17:57.762425 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Aug 13 07:17:57.762431 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Aug 13 07:17:57.762436 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Aug 13 07:17:57.762444 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Aug 13 07:17:57.762449 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Aug 13 07:17:57.762454 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Aug 13 07:17:57.762460 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Aug 13 07:17:57.762465 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Aug 13 07:17:57.762471 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Aug 13 07:17:57.762476 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Aug 13 07:17:57.762482 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Aug 13 07:17:57.762487 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Aug 13 07:17:57.762492 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Aug 13 07:17:57.762499 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Aug 13 07:17:57.762505 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Aug 13 07:17:57.762510 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Aug 13 07:17:57.762516 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Aug 13 07:17:57.762521 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Aug 13 07:17:57.762527 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Aug 13 07:17:57.762532 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Aug 13 07:17:57.762537 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x63] high edge lint[0x1]) Aug 13 07:17:57.762543 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Aug 13 07:17:57.762550 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Aug 13 07:17:57.762555 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Aug 13 07:17:57.762561 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Aug 13 07:17:57.762566 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Aug 13 07:17:57.762572 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Aug 13 07:17:57.762577 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Aug 13 07:17:57.762582 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Aug 13 07:17:57.762588 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Aug 13 07:17:57.762593 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Aug 13 07:17:57.762599 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Aug 13 07:17:57.762605 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Aug 13 07:17:57.762611 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Aug 13 07:17:57.762616 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Aug 13 07:17:57.762622 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Aug 13 07:17:57.762627 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Aug 13 07:17:57.762632 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Aug 13 07:17:57.762638 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Aug 13 07:17:57.762643 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Aug 13 07:17:57.762649 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Aug 13 07:17:57.762655 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Aug 13 07:17:57.762661 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Aug 13 07:17:57.762666 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Aug 13 07:17:57.762672 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x7b] high edge lint[0x1]) Aug 13 07:17:57.762677 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Aug 13 07:17:57.762682 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Aug 13 07:17:57.762688 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Aug 13 07:17:57.762693 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Aug 13 07:17:57.762699 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:17:57.762704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Aug 13 07:17:57.762711 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:17:57.762717 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Aug 13 07:17:57.762722 kernel: TSC deadline timer available Aug 13 07:17:57.762728 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Aug 13 07:17:57.762734 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Aug 13 07:17:57.762740 kernel: Booting paravirtualized kernel on VMware hypervisor Aug 13 07:17:57.762745 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:17:57.762751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Aug 13 07:17:57.762757 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Aug 13 07:17:57.762763 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Aug 13 07:17:57.762769 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Aug 13 07:17:57.762775 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Aug 13 07:17:57.762780 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Aug 13 07:17:57.762785 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Aug 13 07:17:57.762791 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Aug 13 07:17:57.762805 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Aug 13 07:17:57.762812 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Aug 13 
07:17:57.762817 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Aug 13 07:17:57.762824 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Aug 13 07:17:57.762830 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Aug 13 07:17:57.762835 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Aug 13 07:17:57.762841 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Aug 13 07:17:57.762847 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Aug 13 07:17:57.762852 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Aug 13 07:17:57.762858 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Aug 13 07:17:57.762864 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Aug 13 07:17:57.762872 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:17:57.762878 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 07:17:57.762884 kernel: random: crng init done Aug 13 07:17:57.762889 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Aug 13 07:17:57.762895 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Aug 13 07:17:57.762901 kernel: printk: log_buf_len min size: 262144 bytes Aug 13 07:17:57.762907 kernel: printk: log_buf_len: 1048576 bytes Aug 13 07:17:57.762913 kernel: printk: early log buf free: 239648(91%) Aug 13 07:17:57.762919 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:17:57.762926 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:17:57.764016 kernel: Fallback order for Node 0: 0 Aug 13 07:17:57.764027 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Aug 13 07:17:57.764033 kernel: Policy zone: DMA32 Aug 13 07:17:57.764039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:17:57.764046 kernel: Memory: 1936348K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 160020K reserved, 0K cma-reserved) Aug 13 07:17:57.764054 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Aug 13 07:17:57.764060 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:17:57.764066 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:17:57.764072 kernel: Dynamic Preempt: voluntary Aug 13 07:17:57.764078 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:17:57.764084 kernel: rcu: RCU event tracing is enabled. Aug 13 07:17:57.764090 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Aug 13 07:17:57.764097 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:17:57.764104 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:17:57.764110 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:17:57.764116 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 07:17:57.764122 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Aug 13 07:17:57.764128 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Aug 13 07:17:57.764134 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Aug 13 07:17:57.764139 kernel: Console: colour VGA+ 80x25 Aug 13 07:17:57.764145 kernel: printk: console [tty0] enabled Aug 13 07:17:57.764151 kernel: printk: console [ttyS0] enabled Aug 13 07:17:57.764158 kernel: ACPI: Core revision 20230628 Aug 13 07:17:57.764165 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Aug 13 07:17:57.764171 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:17:57.764177 kernel: x2apic enabled Aug 13 07:17:57.764183 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:17:57.764188 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:17:57.764194 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Aug 13 07:17:57.764200 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Aug 13 07:17:57.764207 kernel: Disabled fast string operations Aug 13 07:17:57.764213 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 13 07:17:57.764220 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Aug 13 07:17:57.764226 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:17:57.764232 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 07:17:57.764238 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 07:17:57.764244 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Aug 13 07:17:57.764250 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Aug 13 07:17:57.764256 kernel: RETBleed: Mitigation: Enhanced IBRS Aug 13 07:17:57.764262 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:17:57.764268 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:17:57.764275 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:17:57.764281 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 13 07:17:57.764287 kernel: GDS: Unknown: Dependent on hypervisor status Aug 13 07:17:57.764293 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:17:57.764305 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:17:57.764314 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:17:57.764320 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:17:57.764326 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:17:57.764333 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Aug 13 07:17:57.764339 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:17:57.764345 kernel: pid_max: default: 131072 minimum: 1024 Aug 13 07:17:57.764351 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:17:57.764357 kernel: landlock: Up and running. Aug 13 07:17:57.764362 kernel: SELinux: Initializing. Aug 13 07:17:57.764368 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:17:57.764374 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:17:57.764380 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Aug 13 07:17:57.764387 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Aug 13 07:17:57.764393 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Aug 13 07:17:57.764399 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Aug 13 07:17:57.764405 kernel: Performance Events: Skylake events, core PMU driver. Aug 13 07:17:57.764411 kernel: core: CPUID marked event: 'cpu cycles' unavailable Aug 13 07:17:57.764418 kernel: core: CPUID marked event: 'instructions' unavailable Aug 13 07:17:57.764423 kernel: core: CPUID marked event: 'bus cycles' unavailable Aug 13 07:17:57.764429 kernel: core: CPUID marked event: 'cache references' unavailable Aug 13 07:17:57.764435 kernel: core: CPUID marked event: 'cache misses' unavailable Aug 13 07:17:57.764442 kernel: core: CPUID marked event: 'branch instructions' unavailable Aug 13 07:17:57.764448 kernel: core: CPUID marked event: 'branch misses' unavailable Aug 13 07:17:57.764453 kernel: ... version: 1 Aug 13 07:17:57.764463 kernel: ... bit width: 48 Aug 13 07:17:57.764473 kernel: ... generic registers: 4 Aug 13 07:17:57.764485 kernel: ... value mask: 0000ffffffffffff Aug 13 07:17:57.764492 kernel: ... 
max period: 000000007fffffff Aug 13 07:17:57.764498 kernel: ... fixed-purpose events: 0 Aug 13 07:17:57.764504 kernel: ... event mask: 000000000000000f Aug 13 07:17:57.764512 kernel: signal: max sigframe size: 1776 Aug 13 07:17:57.764518 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:17:57.764524 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:17:57.764529 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:17:57.764535 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:17:57.764541 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:17:57.764547 kernel: .... node #0, CPUs: #1 Aug 13 07:17:57.764553 kernel: Disabled fast string operations Aug 13 07:17:57.764559 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Aug 13 07:17:57.764565 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 07:17:57.764571 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:17:57.764577 kernel: smpboot: Max logical packages: 128 Aug 13 07:17:57.764583 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Aug 13 07:17:57.764589 kernel: devtmpfs: initialized Aug 13 07:17:57.764595 kernel: x86/mm: Memory block size: 128MB Aug 13 07:17:57.764601 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Aug 13 07:17:57.764607 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:17:57.764613 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Aug 13 07:17:57.764619 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:17:57.764626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:17:57.764632 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:17:57.764638 kernel: audit: type=2000 audit(1755069476.093:1): state=initialized audit_enabled=0 res=1 Aug 13 07:17:57.764644 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:17:57.764650 
kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:17:57.764656 kernel: cpuidle: using governor menu Aug 13 07:17:57.764662 kernel: Simple Boot Flag at 0x36 set to 0x80 Aug 13 07:17:57.764668 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:17:57.764673 kernel: dca service started, version 1.12.1 Aug 13 07:17:57.764680 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Aug 13 07:17:57.764686 kernel: PCI: Using configuration type 1 for base access Aug 13 07:17:57.764693 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 07:17:57.764699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:17:57.764704 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:17:57.764710 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:17:57.764716 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:17:57.764722 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:17:57.764728 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:17:57.764735 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:17:57.764741 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:17:57.764747 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Aug 13 07:17:57.764753 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:17:57.764759 kernel: ACPI: Interpreter enabled Aug 13 07:17:57.764764 kernel: ACPI: PM: (supports S0 S1 S5) Aug 13 07:17:57.764770 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:17:57.764776 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:17:57.764784 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:17:57.764790 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Aug 13 07:17:57.764796 kernel: ACPI: PCI Root 
Bridge [PCI0] (domain 0000 [bus 00-7f])
Aug 13 07:17:57.764880 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:17:57.765972 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Aug 13 07:17:57.766033 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Aug 13 07:17:57.766042 kernel: PCI host bridge to bus 0000:00
Aug 13 07:17:57.766096 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:17:57.766147 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Aug 13 07:17:57.766192 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:17:57.766237 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:17:57.766282 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Aug 13 07:17:57.766327 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Aug 13 07:17:57.766389 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Aug 13 07:17:57.766449 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Aug 13 07:17:57.766506 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Aug 13 07:17:57.766561 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Aug 13 07:17:57.766612 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Aug 13 07:17:57.766663 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 13 07:17:57.766713 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 13 07:17:57.766767 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 13 07:17:57.766818 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 13 07:17:57.766885 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Aug 13 07:17:57.767958 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Aug 13 07:17:57.768015 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Aug 13 07:17:57.768072 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Aug 13 07:17:57.768123 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Aug 13 07:17:57.768178 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Aug 13 07:17:57.768233 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Aug 13 07:17:57.768285 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Aug 13 07:17:57.768336 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Aug 13 07:17:57.768387 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Aug 13 07:17:57.768437 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Aug 13 07:17:57.768487 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:17:57.768548 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Aug 13 07:17:57.768603 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.768655 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.768710 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.768762 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.768817 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.768871 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.768926 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771019 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771079 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771132 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771189 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771245 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771304 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771357 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771413 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771466 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771538 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771595 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771653 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771705 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771761 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771814 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771871 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.771926 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.771998 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772051 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772107 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772160 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772216 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772272 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772329 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772381 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772442 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772494 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772551 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772606 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772662 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772714 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772770 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.772824 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.772880 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.773898 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.773999 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.774055 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.774111 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.774164 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.774259 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.774549 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.774628 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775011 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775080 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775134 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775190 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775244 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775303 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775356 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775412 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775465 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775522 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775575 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775634 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775686 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775743 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Aug 13 07:17:57.775795 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.775849 kernel: pci_bus 0000:01: extended config space not accessible
Aug 13 07:17:57.775902 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Aug 13 07:17:57.775970 kernel: pci_bus 0000:02: extended config space not accessible
Aug 13 07:17:57.775980 kernel: acpiphp: Slot [32] registered
Aug 13 07:17:57.775986 kernel: acpiphp: Slot [33] registered
Aug 13 07:17:57.775992 kernel: acpiphp: Slot [34] registered
Aug 13 07:17:57.775998 kernel: acpiphp: Slot [35] registered
Aug 13 07:17:57.776004 kernel: acpiphp: Slot [36] registered
Aug 13 07:17:57.776011 kernel: acpiphp: Slot [37] registered
Aug 13 07:17:57.776016 kernel: acpiphp: Slot [38] registered
Aug 13 07:17:57.776022 kernel: acpiphp: Slot [39] registered
Aug 13 07:17:57.776030 kernel: acpiphp: Slot [40] registered
Aug 13 07:17:57.776037 kernel: acpiphp: Slot [41] registered
Aug 13 07:17:57.776042 kernel: acpiphp: Slot [42] registered
Aug 13 07:17:57.776048 kernel: acpiphp: Slot [43] registered
Aug 13 07:17:57.776054 kernel: acpiphp: Slot [44] registered
Aug 13 07:17:57.776060 kernel: acpiphp: Slot [45] registered
Aug 13 07:17:57.776066 kernel: acpiphp: Slot [46] registered
Aug 13 07:17:57.776072 kernel: acpiphp: Slot [47] registered
Aug 13 07:17:57.776078 kernel: acpiphp: Slot [48] registered
Aug 13 07:17:57.776085 kernel: acpiphp: Slot [49] registered
Aug 13 07:17:57.776091 kernel: acpiphp: Slot [50] registered
Aug 13 07:17:57.776105 kernel: acpiphp: Slot [51] registered
Aug 13 07:17:57.776124 kernel: acpiphp: Slot [52] registered
Aug 13 07:17:57.776140 kernel: acpiphp: Slot [53] registered
Aug 13 07:17:57.776153 kernel: acpiphp: Slot [54] registered
Aug 13 07:17:57.776160 kernel: acpiphp: Slot [55] registered
Aug 13 07:17:57.776166 kernel: acpiphp: Slot [56] registered
Aug 13 07:17:57.776172 kernel: acpiphp: Slot [57] registered
Aug 13 07:17:57.776178 kernel: acpiphp: Slot [58] registered
Aug 13 07:17:57.776185 kernel: acpiphp: Slot [59] registered
Aug 13 07:17:57.776191 kernel: acpiphp: Slot [60] registered
Aug 13 07:17:57.776205 kernel: acpiphp: Slot [61] registered
Aug 13 07:17:57.776211 kernel: acpiphp: Slot [62] registered
Aug 13 07:17:57.776217 kernel: acpiphp: Slot [63] registered
Aug 13 07:17:57.778098 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Aug 13 07:17:57.778168 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Aug 13 07:17:57.778222 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Aug 13 07:17:57.778274 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Aug 13 07:17:57.778329 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Aug 13 07:17:57.778381 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Aug 13 07:17:57.778432 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Aug 13 07:17:57.778482 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Aug 13 07:17:57.778534 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Aug 13 07:17:57.778593 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Aug 13 07:17:57.778646 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Aug 13 07:17:57.778702 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Aug 13 07:17:57.778755 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Aug 13 07:17:57.778808 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Aug 13 07:17:57.778861 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Aug 13 07:17:57.778914 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Aug 13 07:17:57.778987 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Aug 13 07:17:57.779041 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Aug 13 07:17:57.779098 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Aug 13 07:17:57.779150 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Aug 13 07:17:57.779202 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Aug 13 07:17:57.779253 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Aug 13 07:17:57.779307 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Aug 13 07:17:57.779358 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Aug 13 07:17:57.779410 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Aug 13 07:17:57.779461 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Aug 13 07:17:57.779532 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Aug 13 07:17:57.779583 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Aug 13 07:17:57.779633 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Aug 13 07:17:57.779685 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Aug 13 07:17:57.779735 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Aug 13 07:17:57.779786 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Aug 13 07:17:57.779840 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Aug 13 07:17:57.779892 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Aug 13 07:17:57.779978 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Aug 13 07:17:57.780033 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Aug 13 07:17:57.780083 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Aug 13 07:17:57.780134 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Aug 13 07:17:57.780189 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Aug 13 07:17:57.780239 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Aug 13 07:17:57.780289 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Aug 13 07:17:57.780348 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Aug 13 07:17:57.780401 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Aug 13 07:17:57.780460 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Aug 13 07:17:57.780518 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Aug 13 07:17:57.780573 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Aug 13 07:17:57.780625 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Aug 13 07:17:57.780677 kernel: pci 0000:0b:00.0: supports D1 D2
Aug 13 07:17:57.780749 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 13 07:17:57.780802 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Aug 13 07:17:57.780855 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Aug 13 07:17:57.780906 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Aug 13 07:17:57.781962 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Aug 13 07:17:57.782038 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Aug 13 07:17:57.782098 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Aug 13 07:17:57.782152 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Aug 13 07:17:57.782205 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Aug 13 07:17:57.782260 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Aug 13 07:17:57.782312 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Aug 13 07:17:57.782365 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Aug 13 07:17:57.782418 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Aug 13 07:17:57.782474 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Aug 13 07:17:57.782527 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Aug 13 07:17:57.782579 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Aug 13 07:17:57.782632 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Aug 13 07:17:57.782684 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Aug 13 07:17:57.782736 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Aug 13 07:17:57.782789 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Aug 13 07:17:57.782841 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Aug 13 07:17:57.782896 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Aug 13 07:17:57.782964 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Aug 13 07:17:57.783016 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Aug 13 07:17:57.783068 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Aug 13 07:17:57.783121 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Aug 13 07:17:57.783173 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Aug 13 07:17:57.783225 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Aug 13 07:17:57.783279 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Aug 13 07:17:57.783335 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Aug 13 07:17:57.783387 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Aug 13 07:17:57.783439 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Aug 13 07:17:57.783493 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Aug 13 07:17:57.783546 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Aug 13 07:17:57.783597 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Aug 13 07:17:57.783649 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Aug 13 07:17:57.783706 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Aug 13 07:17:57.783758 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Aug 13 07:17:57.783811 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Aug 13 07:17:57.783863 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Aug 13 07:17:57.783917 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Aug 13 07:17:57.784002 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Aug 13 07:17:57.784055 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Aug 13 07:17:57.784107 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Aug 13 07:17:57.784162 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Aug 13 07:17:57.784214 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Aug 13 07:17:57.784266 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Aug 13 07:17:57.784317 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Aug 13 07:17:57.784369 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Aug 13 07:17:57.784421 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Aug 13 07:17:57.784473 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Aug 13 07:17:57.784535 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Aug 13 07:17:57.784591 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Aug 13 07:17:57.784643 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Aug 13 07:17:57.784694 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Aug 13 07:17:57.784746 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Aug 13 07:17:57.784799 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Aug 13 07:17:57.784851 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Aug 13 07:17:57.784903 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Aug 13 07:17:57.784966 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Aug 13 07:17:57.785023 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Aug 13 07:17:57.785075 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Aug 13 07:17:57.785127 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Aug 13 07:17:57.785181 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Aug 13 07:17:57.785233 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Aug 13 07:17:57.785285 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Aug 13 07:17:57.785339 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Aug 13 07:17:57.785393 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Aug 13 07:17:57.785446 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Aug 13 07:17:57.785500 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Aug 13 07:17:57.785552 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Aug 13 07:17:57.785604 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Aug 13 07:17:57.785657 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Aug 13 07:17:57.785709 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Aug 13 07:17:57.785761 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Aug 13 07:17:57.785815 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Aug 13 07:17:57.785870 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Aug 13 07:17:57.785923 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Aug 13 07:17:57.786019 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Aug 13 07:17:57.786071 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Aug 13 07:17:57.786123 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Aug 13 07:17:57.786131 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Aug 13 07:17:57.786138 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Aug 13 07:17:57.786144 kernel: ACPI: PCI: Interrupt link LNKB disabled
Aug 13 07:17:57.786153 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:17:57.786159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Aug 13 07:17:57.786166 kernel: iommu: Default domain type: Translated
Aug 13 07:17:57.786172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:17:57.786178 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:17:57.786184 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:17:57.786190 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Aug 13 07:17:57.786196 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Aug 13 07:17:57.786247 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Aug 13 07:17:57.786302 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Aug 13 07:17:57.786354 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:17:57.786363 kernel: vgaarb: loaded
Aug 13 07:17:57.786369 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Aug 13 07:17:57.786375 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Aug 13 07:17:57.786381 kernel: clocksource: Switched to clocksource tsc-early
Aug 13 07:17:57.786387 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:17:57.786393 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:17:57.786399 kernel: pnp: PnP ACPI init
Aug 13 07:17:57.786456 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Aug 13 07:17:57.786506 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Aug 13 07:17:57.786553 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Aug 13 07:17:57.786604 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Aug 13 07:17:57.786656 kernel: pnp 00:06: [dma 2]
Aug 13 07:17:57.786708 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Aug 13 07:17:57.786760 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Aug 13 07:17:57.786807 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Aug 13 07:17:57.786816 kernel: pnp: PnP ACPI: found 8 devices
Aug 13 07:17:57.786822 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:17:57.786828 kernel: NET: Registered PF_INET protocol family
Aug 13 07:17:57.786834 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:17:57.786840 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 07:17:57.786847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:17:57.786854 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:17:57.786861 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 07:17:57.786867 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 07:17:57.786873 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:17:57.786880 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:17:57.786886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:17:57.786892 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:17:57.787015 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Aug 13 07:17:57.787074 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Aug 13 07:17:57.787127 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Aug 13 07:17:57.787180 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Aug 13 07:17:57.787232 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Aug 13 07:17:57.787285 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Aug 13 07:17:57.787337 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Aug 13 07:17:57.787391 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Aug 13 07:17:57.787443 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Aug 13 07:17:57.787499 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Aug 13 07:17:57.787551 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Aug 13 07:17:57.787603 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Aug 13 07:17:57.787654 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Aug 13 07:17:57.787710 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Aug 13 07:17:57.787762 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Aug 13 07:17:57.787814 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Aug 13 07:17:57.787866 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Aug 13 07:17:57.787918 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Aug 13 07:17:57.787978 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Aug 13 07:17:57.788033 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Aug 13 07:17:57.788085 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Aug 13 07:17:57.788137 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Aug 13 07:17:57.788190 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Aug 13 07:17:57.788242 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Aug 13 07:17:57.788294 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Aug 13 07:17:57.788349 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.788401 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.788453 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.788505 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.788557 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.788608 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.788660 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.788711 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.788766 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.788818 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.788870 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.788922 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.788989 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789041 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789093 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789145 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789199 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789252 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789304 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789356 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789408 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789470 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789522 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789574 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789628 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789679 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789731 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789787 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.789840 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.789891 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.791970 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792057 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.792120 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792178 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.792257 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792347 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.792410 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792476 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.792530 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792583 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.792648 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792702 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.792763 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792828 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.792891 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.792969 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793028 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.793095 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793150 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.793205 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793262 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.793323 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793385 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.793456 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793522 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.793595 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793651 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.793704 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793756 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.793820 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.793885 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.794518 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.794594 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.794678 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.794735 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.794788 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.794842 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.794898 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.794987 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795045 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.795102 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795163 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.795223 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795284 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.795347 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795419 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.795479 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795534 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.795596 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795668 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.795725 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795778 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.795837 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Aug 13 07:17:57.795894 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Aug 13 07:17:57.796222 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Aug 13 07:17:57.796304 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Aug 13 07:17:57.796371 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Aug 13 07:17:57.796433 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Aug 13 07:17:57.796492 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Aug 13 07:17:57.796560 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Aug 13 07:17:57.796628 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Aug 13 07:17:57.796695 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Aug 13 07:17:57.796766 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Aug 13 07:17:57.796820 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Aug 13 07:17:57.796874 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Aug 13 07:17:57.796931 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Aug 13 07:17:57.799033 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Aug 13 07:17:57.799092 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Aug 13 07:17:57.799155 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Aug 13 07:17:57.799238 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Aug 13 07:17:57.799320 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Aug 13 07:17:57.799377 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Aug 13 07:17:57.799444 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Aug 13 07:17:57.799505 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Aug 13 07:17:57.799568 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Aug 13 07:17:57.799636 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Aug 13 07:17:57.799717 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Aug 13 07:17:57.799789 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Aug 13 07:17:57.799856 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Aug 13 07:17:57.799921 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Aug 13 07:17:57.800019 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Aug 13 07:17:57.800090 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Aug 13 07:17:57.800143 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Aug 13 07:17:57.800212 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Aug 13 07:17:57.800267 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Aug 13 07:17:57.800341 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Aug 13 07:17:57.800419 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Aug 13 07:17:57.800517 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Aug 13 07:17:57.800583 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Aug 13 07:17:57.800636 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Aug 13 07:17:57.800699 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Aug 13 07:17:57.800780 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Aug 13 07:17:57.800844 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Aug 13 07:17:57.800903 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Aug 13 07:17:57.800972 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Aug 13 07:17:57.801037 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Aug 13 07:17:57.801106 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Aug 13 07:17:57.801177 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Aug 13 07:17:57.801246 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Aug 13 07:17:57.801309 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Aug 13 07:17:57.801387 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Aug 13 07:17:57.801442 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Aug 13 07:17:57.801503 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Aug 13 07:17:57.801559 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Aug 13 07:17:57.801616 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Aug 13 07:17:57.801674 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Aug 13 07:17:57.801735 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Aug 13 07:17:57.801788 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Aug 13 07:17:57.801851 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Aug 13 07:17:57.801925 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Aug 13 07:17:57.802677 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Aug 13 07:17:57.802764 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Aug 13 07:17:57.802838 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Aug 13 07:17:57.802906 kernel: pci 0000:00:16.7:
bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 07:17:57.802976 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 07:17:57.803055 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 07:17:57.803130 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 07:17:57.803186 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 07:17:57.803257 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 07:17:57.803322 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 07:17:57.803376 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 07:17:57.803446 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 07:17:57.803506 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 07:17:57.803573 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 07:17:57.803633 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 07:17:57.803706 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 07:17:57.803782 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 07:17:57.803856 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 07:17:57.803925 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 07:17:57.803996 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 07:17:57.804064 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 07:17:57.804127 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 07:17:57.804181 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 07:17:57.804235 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Aug 13 07:17:57.804290 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 07:17:57.804355 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 
07:17:57.804414 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 07:17:57.804471 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 07:17:57.804530 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 07:17:57.804585 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 07:17:57.804648 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 07:17:57.804704 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 07:17:57.804776 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 07:17:57.804843 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 07:17:57.804904 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 07:17:57.805421 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 07:17:57.805484 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 07:17:57.805558 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 07:17:57.805621 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 07:17:57.805675 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 07:17:57.805748 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 07:17:57.805803 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 07:17:57.805869 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 07:17:57.805928 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 07:17:57.805999 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 07:17:57.806065 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 07:17:57.806125 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Aug 13 07:17:57.806187 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 07:17:57.806241 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Aug 13 07:17:57.806311 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 07:17:57.806367 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 07:17:57.806431 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 07:17:57.806485 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 07:17:57.806550 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 07:17:57.806604 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 07:17:57.806664 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 07:17:57.806727 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 07:17:57.806783 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 07:17:57.806847 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 07:17:57.806896 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 07:17:57.807007 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 07:17:57.807064 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Aug 13 07:17:57.807115 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Aug 13 07:17:57.807184 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Aug 13 07:17:57.807240 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Aug 13 07:17:57.807291 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 07:17:57.807350 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 07:17:57.807405 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 07:17:57.807462 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 07:17:57.807516 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Aug 13 07:17:57.807569 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Aug 13 07:17:57.807650 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Aug 13 07:17:57.807703 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Aug 13 07:17:57.807763 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 07:17:57.807828 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Aug 13 07:17:57.807879 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Aug 13 07:17:57.807928 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 07:17:57.808007 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Aug 13 07:17:57.808077 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Aug 13 07:17:57.808135 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 07:17:57.808202 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Aug 13 07:17:57.808266 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 07:17:57.808328 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Aug 13 07:17:57.808384 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 07:17:57.808451 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Aug 13 07:17:57.808502 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 07:17:57.808560 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Aug 13 07:17:57.808622 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 07:17:57.808680 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Aug 13 07:17:57.808746 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 07:17:57.808826 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Aug 13 07:17:57.808877 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Aug 13 07:17:57.808981 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 07:17:57.809037 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Aug 13 07:17:57.809086 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Aug 13 07:17:57.809411 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 07:17:57.809470 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Aug 13 07:17:57.809530 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Aug 13 07:17:57.809583 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 07:17:57.809636 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Aug 13 07:17:57.809686 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 07:17:57.809755 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Aug 13 07:17:57.809806 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 07:17:57.809879 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Aug 13 07:17:57.809939 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 07:17:57.810008 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Aug 13 07:17:57.810071 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 07:17:57.810130 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Aug 13 07:17:57.810193 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 07:17:57.810256 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Aug 13 07:17:57.810313 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Aug 13 07:17:57.810366 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 07:17:57.810427 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Aug 13 07:17:57.810477 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Aug 13 07:17:57.810538 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 07:17:57.810592 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Aug 13 07:17:57.810645 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Aug 13 07:17:57.810693 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 07:17:57.810757 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Aug 13 07:17:57.810823 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 07:17:57.810892 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Aug 13 07:17:57.810962 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 07:17:57.811050 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Aug 13 07:17:57.811126 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 07:17:57.811191 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Aug 13 07:17:57.811262 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 07:17:57.811338 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Aug 13 07:17:57.811398 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 07:17:57.811465 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Aug 13 07:17:57.811542 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Aug 13 07:17:57.811601 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 07:17:57.811676 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Aug 13 07:17:57.811741 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Aug 13 07:17:57.811816 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 07:17:57.811877 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Aug 13 07:17:57.812296 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 07:17:57.812367 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Aug 13 07:17:57.812435 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Aug 13 07:17:57.812494 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Aug 13 07:17:57.812552 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 07:17:57.812606 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Aug 13 07:17:57.812660 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 07:17:57.812713 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Aug 13 07:17:57.812784 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 07:17:57.812845 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Aug 13 07:17:57.812914 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 07:17:57.813047 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 07:17:57.813065 kernel: PCI: CLS 32 bytes, default 64 Aug 13 07:17:57.813076 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:17:57.813085 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Aug 13 07:17:57.813094 kernel: clocksource: Switched to clocksource tsc Aug 13 07:17:57.813101 kernel: Initialise system trusted keyrings Aug 13 07:17:57.813112 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:17:57.813121 kernel: Key type asymmetric registered Aug 13 07:17:57.813128 kernel: Asymmetric key parser 'x509' registered Aug 13 07:17:57.813135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:17:57.813145 kernel: io scheduler mq-deadline registered Aug 13 07:17:57.813152 kernel: io scheduler kyber registered Aug 13 07:17:57.813158 kernel: io scheduler bfq registered Aug 13 07:17:57.813219 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Aug 13 07:17:57.813289 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.813345 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Aug 13 07:17:57.813399 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.813458 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Aug 13 07:17:57.813528 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815279 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Aug 13 07:17:57.815363 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815449 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Aug 13 07:17:57.815519 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815577 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Aug 13 07:17:57.815640 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815711 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Aug 13 07:17:57.815774 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.815829 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Aug 13 07:17:57.815883 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816243 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Aug 13 07:17:57.816315 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816383 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Aug 13 07:17:57.816447 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816503 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Aug 13 07:17:57.816562 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816618 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Aug 13 07:17:57.816682 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816736 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Aug 13 07:17:57.816805 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.816874 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Aug 13 07:17:57.816930 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817003 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Aug 13 07:17:57.817077 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817136 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Aug 13 07:17:57.817196 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817267 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Aug 13 07:17:57.817347 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817672 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Aug 13 07:17:57.817733 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817796 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Aug 13 07:17:57.817875 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.817956 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Aug 13 07:17:57.818019 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818074 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Aug 13 07:17:57.818137 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818195 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Aug 13 07:17:57.818249 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818331 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Aug 13 07:17:57.818395 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818464 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Aug 13 07:17:57.818523 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818588 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Aug 13 07:17:57.818650 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818711 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Aug 13 07:17:57.818769 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818828 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Aug 13 07:17:57.818882 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.818953 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Aug 13 07:17:57.819025 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819086 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Aug 13 07:17:57.819142 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819203 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Aug 13 07:17:57.819270 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819330 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Aug 13 07:17:57.819397 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819454 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Aug 13 07:17:57.819528 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 07:17:57.819541 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Aug 13 07:17:57.819548 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:17:57.819555 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:17:57.819561 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Aug 13 07:17:57.819571 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:17:57.819578 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:17:57.819646 kernel: rtc_cmos 00:01: registered as rtc0 Aug 13 07:17:57.819712 kernel: rtc_cmos 00:01: setting system clock to 2025-08-13T07:17:57 UTC (1755069477) Aug 13 07:17:57.819770 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Aug 13 07:17:57.819780 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:17:57.819788 kernel: intel_pstate: CPU model not supported Aug 13 07:17:57.819794 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:17:57.819800 kernel: Segment Routing with IPv6 Aug 13 07:17:57.819807 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:17:57.819815 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:17:57.819822 kernel: Key type dns_resolver registered Aug 13 07:17:57.819831 kernel: IPI shorthand broadcast: enabled Aug 13 07:17:57.819838 kernel: sched_clock: Marking stable (949003622, 231506761)->(1235223535, -54713152) Aug 13 07:17:57.819845 kernel: registered taskstats version 1 Aug 13 07:17:57.819851 kernel: Loading compiled-in X.509 certificates Aug 13 07:17:57.819857 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:17:57.819864 kernel: Key type .fscrypt registered Aug 13 07:17:57.819870 kernel: Key type fscrypt-provisioning registered Aug 13 07:17:57.819878 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 07:17:57.819884 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:17:57.819892 kernel: ima: No architecture policies found Aug 13 07:17:57.819898 kernel: clk: Disabling unused clocks Aug 13 07:17:57.819909 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:17:57.819915 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:17:57.819922 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:17:57.819928 kernel: Run /init as init process Aug 13 07:17:57.819944 kernel: with arguments: Aug 13 07:17:57.819955 kernel: /init Aug 13 07:17:57.819961 kernel: with environment: Aug 13 07:17:57.819967 kernel: HOME=/ Aug 13 07:17:57.819974 kernel: TERM=linux Aug 13 07:17:57.819980 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:17:57.819987 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:17:57.819995 systemd[1]: Detected virtualization vmware. Aug 13 07:17:57.820002 systemd[1]: Detected architecture x86-64. Aug 13 07:17:57.820010 systemd[1]: Running in initrd. Aug 13 07:17:57.820016 systemd[1]: No hostname configured, using default hostname. Aug 13 07:17:57.820023 systemd[1]: Hostname set to . Aug 13 07:17:57.820030 systemd[1]: Initializing machine ID from random generator. Aug 13 07:17:57.820036 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:17:57.820042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:17:57.820049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Aug 13 07:17:57.820056 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:17:57.820064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:17:57.820070 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:17:57.820077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:17:57.820084 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:17:57.820091 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:17:57.820098 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:17:57.820104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:17:57.820112 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:17:57.820121 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:17:57.820128 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:17:57.820135 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:17:57.820141 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:17:57.820148 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:17:57.820154 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:17:57.820161 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:17:57.820168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:17:57.820176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:17:57.820183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 07:17:57.820192 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:17:57.820201 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:17:57.820211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:17:57.820219 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:17:57.820225 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:17:57.820232 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:17:57.820240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:17:57.820246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:57.820253 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:17:57.820260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:17:57.820266 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:17:57.820287 systemd-journald[215]: Collecting audit messages is disabled. Aug 13 07:17:57.820304 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:17:57.820311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:17:57.820319 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:17:57.820326 kernel: Bridge firewalling registered Aug 13 07:17:57.820333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:17:57.820339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:57.820346 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:57.820353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 13 07:17:57.820361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:17:57.820368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:57.820374 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:17:57.820383 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:17:57.820389 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:17:57.820397 systemd-journald[215]: Journal started
Aug 13 07:17:57.820411 systemd-journald[215]: Runtime Journal (/run/log/journal/bf4b32bcda05408ba43e9509c5b75d1b) is 4.8M, max 38.6M, 33.8M free.
Aug 13 07:17:57.754945 systemd-modules-load[216]: Inserted module 'overlay'
Aug 13 07:17:57.822016 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:17:57.778135 systemd-modules-load[216]: Inserted module 'br_netfilter'
Aug 13 07:17:57.822485 dracut-cmdline[236]: dracut-dracut-053
Aug 13 07:17:57.824806 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:57.829050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:17:57.834939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:17:57.843997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:17:57.858876 systemd-resolved[274]: Positive Trust Anchors:
Aug 13 07:17:57.858889 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:17:57.858911 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:17:57.860878 systemd-resolved[274]: Defaulting to hostname 'linux'.
Aug 13 07:17:57.861757 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:17:57.861916 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:17:57.879945 kernel: SCSI subsystem initialized
Aug 13 07:17:57.886949 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:17:57.893946 kernel: iscsi: registered transport (tcp)
Aug 13 07:17:57.908058 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:17:57.908077 kernel: QLogic iSCSI HBA Driver
Aug 13 07:17:57.928339 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:17:57.932071 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:17:57.949960 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:17:57.950006 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:17:57.950025 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:17:57.983949 kernel: raid6: avx2x4 gen() 43095 MB/s
Aug 13 07:17:58.000944 kernel: raid6: avx2x2 gen() 52923 MB/s
Aug 13 07:17:58.018231 kernel: raid6: avx2x1 gen() 44085 MB/s
Aug 13 07:17:58.018251 kernel: raid6: using algorithm avx2x2 gen() 52923 MB/s
Aug 13 07:17:58.036157 kernel: raid6: .... xor() 31102 MB/s, rmw enabled
Aug 13 07:17:58.036178 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:17:58.049943 kernel: xor: automatically using best checksumming function avx
Aug 13 07:17:58.150000 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:17:58.155187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:17:58.160100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:17:58.167251 systemd-udevd[433]: Using default interface naming scheme 'v255'.
Aug 13 07:17:58.169689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:17:58.177031 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:17:58.185482 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation
Aug 13 07:17:58.202291 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:17:58.206025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:17:58.282088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:17:58.288025 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:17:58.295199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:17:58.295884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:17:58.296243 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:17:58.296703 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:17:58.301046 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:17:58.309342 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:17:58.354947 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Aug 13 07:17:58.354984 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Aug 13 07:17:58.355944 kernel: vmw_pvscsi: using 64bit dma
Aug 13 07:17:58.357942 kernel: vmw_pvscsi: max_id: 16
Aug 13 07:17:58.357964 kernel: vmw_pvscsi: setting ring_pages to 8
Aug 13 07:17:58.366986 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Aug 13 07:17:58.372035 kernel: vmw_pvscsi: enabling reqCallThreshold
Aug 13 07:17:58.372071 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Aug 13 07:17:58.372199 kernel: vmw_pvscsi: driver-based request coalescing enabled
Aug 13 07:17:58.372213 kernel: vmw_pvscsi: using MSI-X
Aug 13 07:17:58.373942 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:17:58.377961 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Aug 13 07:17:58.378113 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Aug 13 07:17:58.378197 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Aug 13 07:17:58.378288 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Aug 13 07:17:58.380045 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:17:58.380133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:58.380367 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:17:58.380486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:17:58.380576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:58.380712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:58.390360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:58.392952 kernel: libata version 3.00 loaded.
Aug 13 07:17:58.396965 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:17:58.398972 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:17:58.398988 kernel: ata_piix 0000:00:07.1: version 2.13
Aug 13 07:17:58.402949 kernel: scsi host1: ata_piix
Aug 13 07:17:58.403039 kernel: scsi host2: ata_piix
Aug 13 07:17:58.403108 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Aug 13 07:17:58.403118 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Aug 13 07:17:58.408295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:58.413037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:17:58.425155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:58.573667 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Aug 13 07:17:58.577977 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Aug 13 07:17:58.593459 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Aug 13 07:17:58.593566 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 07:17:58.593633 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Aug 13 07:17:58.593695 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Aug 13 07:17:58.595020 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Aug 13 07:17:58.596993 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Aug 13 07:17:58.597096 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:17:58.604333 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:17:58.604349 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 07:17:58.607953 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:17:58.635967 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (485)
Aug 13 07:17:58.641439 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Aug 13 07:17:58.641951 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (483)
Aug 13 07:17:58.644436 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Aug 13 07:17:58.647806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Aug 13 07:17:58.650224 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Aug 13 07:17:58.650377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Aug 13 07:17:58.655013 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:17:58.749245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:17:58.754953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:17:59.755967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:17:59.756879 disk-uuid[591]: The operation has completed successfully.
Aug 13 07:17:59.876769 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:17:59.876835 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:17:59.884084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:17:59.886246 sh[608]: Success
Aug 13 07:17:59.895959 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:18:00.033479 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:18:00.034602 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:18:00.035000 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:18:00.079875 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:18:00.079922 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:00.079942 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:18:00.082456 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:18:00.082474 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:18:00.127967 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 07:18:00.129563 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:18:00.134063 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Aug 13 07:18:00.136053 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:18:00.156912 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:00.156975 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:00.156987 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:18:00.160956 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:18:00.177085 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:00.176975 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:18:00.180054 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:18:00.183054 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:18:00.260777 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Aug 13 07:18:00.266200 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:18:00.328447 ignition[666]: Ignition 2.19.0
Aug 13 07:18:00.328455 ignition[666]: Stage: fetch-offline
Aug 13 07:18:00.328480 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:00.328487 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:00.328556 ignition[666]: parsed url from cmdline: ""
Aug 13 07:18:00.328558 ignition[666]: no config URL provided
Aug 13 07:18:00.328561 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:18:00.328566 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:18:00.329012 ignition[666]: config successfully fetched
Aug 13 07:18:00.329029 ignition[666]: parsing config with SHA512: d62524f43e831de10ea973686b00c37fc2da0c01d803dd3e1df369c331b22e80b0e0009fd355fde093dcda205ef3435392a08689f60b52121a7ab3572fc2546c
Aug 13 07:18:00.332325 unknown[666]: fetched base config from "system"
Aug 13 07:18:00.332334 unknown[666]: fetched user config from "vmware"
Aug 13 07:18:00.332830 ignition[666]: fetch-offline: fetch-offline passed
Aug 13 07:18:00.333078 ignition[666]: Ignition finished successfully
Aug 13 07:18:00.334201 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:18:00.345398 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:18:00.349022 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:18:00.360698 systemd-networkd[799]: lo: Link UP
Aug 13 07:18:00.360703 systemd-networkd[799]: lo: Gained carrier
Aug 13 07:18:00.361396 systemd-networkd[799]: Enumeration completed
Aug 13 07:18:00.361697 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Aug 13 07:18:00.361720 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:18:00.361858 systemd[1]: Reached target network.target - Network.
Aug 13 07:18:00.361958 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:18:00.364939 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Aug 13 07:18:00.365064 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Aug 13 07:18:00.365217 systemd-networkd[799]: ens192: Link UP
Aug 13 07:18:00.365221 systemd-networkd[799]: ens192: Gained carrier
Aug 13 07:18:00.376086 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:18:00.385044 ignition[801]: Ignition 2.19.0
Aug 13 07:18:00.385056 ignition[801]: Stage: kargs
Aug 13 07:18:00.385172 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:00.385179 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:00.385775 ignition[801]: kargs: kargs passed
Aug 13 07:18:00.385807 ignition[801]: Ignition finished successfully
Aug 13 07:18:00.387117 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:18:00.391155 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:18:00.397767 ignition[809]: Ignition 2.19.0
Aug 13 07:18:00.397774 ignition[809]: Stage: disks
Aug 13 07:18:00.397871 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:00.397878 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:00.398432 ignition[809]: disks: disks passed
Aug 13 07:18:00.398460 ignition[809]: Ignition finished successfully
Aug 13 07:18:00.399338 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:18:00.399642 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:18:00.399848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:18:00.400109 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:18:00.400330 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:18:00.400420 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:18:00.407077 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:18:00.417658 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 13 07:18:00.419120 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:18:00.424663 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:18:00.515948 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:18:00.516299 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:18:00.516727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:18:00.532014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:18:00.534404 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:18:00.535060 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:18:00.535107 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:18:00.535132 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:18:00.541957 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (825)
Aug 13 07:18:00.545307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:18:00.546903 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:00.546920 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:00.546929 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:18:00.550951 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:18:00.552060 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:18:00.553153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:18:00.579834 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:18:00.582681 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:18:00.585208 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:18:00.597906 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:18:00.713695 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:18:00.718022 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:18:00.720445 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:18:00.723944 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:00.740720 ignition[938]: INFO : Ignition 2.19.0
Aug 13 07:18:00.740720 ignition[938]: INFO : Stage: mount
Aug 13 07:18:00.741031 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:00.741031 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:00.741809 ignition[938]: INFO : mount: mount passed
Aug 13 07:18:00.741904 ignition[938]: INFO : Ignition finished successfully
Aug 13 07:18:00.742445 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:18:00.746042 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:18:00.820222 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:18:01.063217 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:18:01.068166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:18:01.077772 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Aug 13 07:18:01.077812 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:01.077832 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:01.079401 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:18:01.082947 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:18:01.083470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:18:01.094929 ignition[966]: INFO : Ignition 2.19.0
Aug 13 07:18:01.094929 ignition[966]: INFO : Stage: files
Aug 13 07:18:01.095435 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:01.095435 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:01.095927 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:18:01.096386 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:18:01.096386 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:18:01.098748 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:18:01.098906 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:18:01.099073 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:18:01.099025 unknown[966]: wrote ssh authorized keys file for user: core
Aug 13 07:18:01.100322 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:18:01.100657 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:18:01.258100 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:18:01.461285 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:18:01.461285 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:01.461826 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.463104 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:18:01.489033 systemd-networkd[799]: ens192: Gained IPv6LL
Aug 13 07:18:01.740421 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:18:01.943673 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:01.943960 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Aug 13 07:18:01.943960 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Aug 13 07:18:01.943960 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 07:18:01.949824 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 07:18:01.950010 ignition[966]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:18:01.997381 ignition[966]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:02.000409 ignition[966]: INFO : files: files passed
Aug 13 07:18:02.000409 ignition[966]: INFO : Ignition finished successfully
Aug 13 07:18:02.001680 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:18:02.007061 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:18:02.008381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:18:02.020510 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:02.020510 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:02.021539 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:02.021907 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:18:02.022078 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:18:02.022332 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:18:02.022880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:18:02.026029 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:18:02.037610 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:18:02.037668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:18:02.038028 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:18:02.038158 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:18:02.038347 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:18:02.038743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:18:02.047766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:18:02.054149 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:18:02.060508 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:18:02.060850 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:18:02.061281 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:18:02.061571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:18:02.061646 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:18:02.062214 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:18:02.062494 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:18:02.062801 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:18:02.063099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:18:02.063420 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:18:02.063965 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:18:02.064131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:18:02.064601 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:18:02.064902 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:18:02.065263 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:18:02.065402 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:18:02.065467 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:18:02.065833 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:18:02.066017 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:18:02.066216 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:18:02.066260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:18:02.066466 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:18:02.066523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:18:02.066771 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:18:02.066832 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:18:02.067069 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:18:02.067214 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:18:02.071046 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:18:02.071248 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:18:02.071471 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:18:02.071687 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:18:02.071735 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:18:02.071953 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:18:02.072000 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:18:02.072227 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:18:02.072284 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:18:02.072523 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:18:02.072576 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:18:02.079171 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:18:02.081045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:18:02.081159 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:18:02.081244 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:18:02.081526 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:18:02.081603 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:18:02.083821 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:18:02.083878 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:18:02.088845 ignition[1021]: INFO : Ignition 2.19.0
Aug 13 07:18:02.088845 ignition[1021]: INFO : Stage: umount
Aug 13 07:18:02.089600 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:02.089600 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 07:18:02.090208 ignition[1021]: INFO : umount: umount passed
Aug 13 07:18:02.090707 ignition[1021]: INFO : Ignition finished successfully
Aug 13 07:18:02.091073 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:18:02.091146 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:18:02.091758 systemd[1]: Stopped target network.target - Network.
Aug 13 07:18:02.091870 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:18:02.091897 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:18:02.092087 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:18:02.092108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:18:02.092245 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:18:02.092267 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:18:02.092408 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:18:02.092428 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:18:02.092637 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:18:02.092807 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:18:02.095447 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:18:02.099783 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:18:02.099871 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:18:02.100308 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:18:02.100337 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:18:02.108143 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:18:02.108259 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:18:02.108293 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:18:02.108441 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Aug 13 07:18:02.108466 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Aug 13 07:18:02.108651 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:18:02.108895 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:18:02.108962 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:18:02.112064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:18:02.112121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:18:02.112646 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:18:02.112827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:18:02.113095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:18:02.113123 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:18:02.117293 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:18:02.117359 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:18:02.122410 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:18:02.122504 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:18:02.122885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:18:02.122913 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:18:02.123167 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:18:02.123187 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:18:02.123350 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:18:02.123374 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:18:02.123699 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:18:02.123726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:18:02.124042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:18:02.124066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:02.129050 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:18:02.129342 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:18:02.129377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:18:02.129518 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:18:02.129546 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:18:02.129686 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:18:02.129719 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:18:02.129862 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:18:02.129885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:02.132628 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:18:02.132702 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:18:02.196953 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:18:02.197050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:18:02.197585 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:18:02.197758 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:18:02.197797 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:18:02.202049 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:18:02.214108 systemd[1]: Switching root.
Aug 13 07:18:02.253113 systemd-journald[215]: Journal stopped
Aug 13 07:18:04.069521 systemd-journald[215]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:18:04.083891 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:18:04.083906 kernel: SELinux: policy capability open_perms=1
Aug 13 07:18:04.083911 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:18:04.083917 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:18:04.083922 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:18:04.083931 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:18:04.084516 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:18:04.084540 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:18:04.084547 systemd[1]: Successfully loaded SELinux policy in 34.861ms.
Aug 13 07:18:04.084555 kernel: audit: type=1403 audit(1755069483.098:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:18:04.084576 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.297ms.
Aug 13 07:18:04.084583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:18:04.084591 systemd[1]: Detected virtualization vmware.
Aug 13 07:18:04.084598 systemd[1]: Detected architecture x86-64.
Aug 13 07:18:04.084604 systemd[1]: Detected first boot.
Aug 13 07:18:04.084610 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:18:04.084618 zram_generator::config[1063]: No configuration found.
Aug 13 07:18:04.084625 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:18:04.084632 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Aug 13 07:18:04.084639 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Aug 13 07:18:04.084645 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:18:04.084651 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:18:04.084657 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:18:04.084665 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:18:04.084671 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:18:04.084677 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:18:04.084684 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:18:04.084690 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:18:04.084696 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:18:04.084702 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:18:04.084710 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:18:04.084717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:18:04.084724 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:18:04.084730 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:18:04.084743 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:18:04.084752 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:18:04.084759 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:18:04.084765 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:18:04.084774 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:18:04.084780 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:18:04.084788 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:18:04.084795 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:18:04.084801 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:18:04.084808 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:18:04.084814 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:18:04.084820 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:18:04.084828 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:18:04.084835 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:18:04.084841 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:18:04.084848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:18:04.084854 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:18:04.084863 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:18:04.084870 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:18:04.084876 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:18:04.084883 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:18:04.084889 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:18:04.084896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:04.084903 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:18:04.084909 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:18:04.084917 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:18:04.084924 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:18:04.084931 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:18:04.086454 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:18:04.086462 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Aug 13 07:18:04.086469 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:18:04.086476 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:18:04.086499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:18:04.086509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:18:04.086516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:18:04.086536 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:18:04.086543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:18:04.086550 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:18:04.086557 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:18:04.086563 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:18:04.086570 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:18:04.086577 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:18:04.086585 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:18:04.086592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:18:04.086598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:18:04.086605 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:18:04.086611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:18:04.086618 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:18:04.086625 systemd[1]: Stopped verity-setup.service.
Aug 13 07:18:04.086631 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:04.086639 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:18:04.086646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:18:04.086653 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:18:04.086659 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:18:04.086666 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:18:04.086672 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:18:04.086679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:18:04.086686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:18:04.086692 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:18:04.086700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:18:04.086707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:18:04.086713 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:18:04.086720 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:18:04.086726 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:18:04.086733 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:18:04.086739 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:18:04.086746 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:18:04.086768 systemd-journald[1146]: Collecting audit messages is disabled.
Aug 13 07:18:04.086784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:18:04.086791 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:18:04.086798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:18:04.086806 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:18:04.086813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:18:04.086820 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:18:04.086826 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:18:04.086833 kernel: fuse: init (API version 7.39)
Aug 13 07:18:04.086839 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:18:04.086846 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:18:04.086852 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:18:04.086860 kernel: loop: module loaded
Aug 13 07:18:04.086866 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:18:04.086873 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:18:04.086880 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:18:04.086889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:18:04.086897 systemd-journald[1146]: Journal started
Aug 13 07:18:04.086912 systemd-journald[1146]: Runtime Journal (/run/log/journal/7e60457a33d948cd99f6e8f37737b236) is 4.8M, max 38.6M, 33.8M free.
Aug 13 07:18:03.834063 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:18:03.865156 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 07:18:03.865448 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:18:04.092311 jq[1130]: true
Aug 13 07:18:04.092872 jq[1151]: true
Aug 13 07:18:04.103127 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:18:04.103147 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:18:04.096378 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:18:04.113247 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:18:04.115038 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:18:04.116094 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:18:04.116255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:18:04.120836 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:18:04.121590 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:18:04.122173 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:18:04.122997 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:18:04.125393 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:18:04.132319 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:18:04.135191 kernel: loop0: detected capacity change from 0 to 2976
Aug 13 07:18:04.160949 systemd-journald[1146]: Time spent on flushing to /var/log/journal/7e60457a33d948cd99f6e8f37737b236 is 40.223ms for 1834 entries.
Aug 13 07:18:04.160949 systemd-journald[1146]: System Journal (/var/log/journal/7e60457a33d948cd99f6e8f37737b236) is 8.0M, max 584.8M, 576.8M free.
Aug 13 07:18:04.369156 systemd-journald[1146]: Received client request to flush runtime journal.
Aug 13 07:18:04.369213 kernel: ACPI: bus type drm_connector registered
Aug 13 07:18:04.272686 ignition[1165]: Ignition 2.19.0
Aug 13 07:18:04.189459 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:18:04.272860 ignition[1165]: deleting config from guestinfo properties
Aug 13 07:18:04.189564 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:18:04.368433 ignition[1165]: Successfully deleted config
Aug 13 07:18:04.196785 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Aug 13 07:18:04.196793 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Aug 13 07:18:04.201973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:18:04.234102 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:18:04.242568 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:18:04.246754 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:18:04.255178 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:18:04.256508 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:18:04.270248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:18:04.370909 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Aug 13 07:18:04.371315 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:18:04.418951 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:18:04.435973 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:18:04.437210 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:18:04.454951 kernel: loop1: detected capacity change from 0 to 221472
Aug 13 07:18:04.473662 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:18:04.480028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:18:04.488556 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Aug 13 07:18:04.488569 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Aug 13 07:18:04.490702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:18:04.569169 kernel: loop2: detected capacity change from 0 to 140768
Aug 13 07:18:04.678952 kernel: loop3: detected capacity change from 0 to 142488
Aug 13 07:18:04.876953 kernel: loop4: detected capacity change from 0 to 2976
Aug 13 07:18:04.924955 kernel: loop5: detected capacity change from 0 to 221472
Aug 13 07:18:05.099113 kernel: loop6: detected capacity change from 0 to 140768
Aug 13 07:18:05.119406 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:18:05.125358 kernel: loop7: detected capacity change from 0 to 142488
Aug 13 07:18:05.125135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:18:05.138780 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Aug 13 07:18:05.261119 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Aug 13 07:18:05.261387 (sd-merge)[1236]: Merged extensions into '/usr'.
Aug 13 07:18:05.269487 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:18:05.269498 systemd[1]: Reloading...
Aug 13 07:18:05.308947 zram_generator::config[1261]: No configuration found.
Aug 13 07:18:05.414841 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Aug 13 07:18:05.438031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:18:05.451945 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:18:05.457134 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:18:05.460945 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1306)
Aug 13 07:18:05.504264 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:18:05.504379 systemd[1]: Reloading finished in 234 ms.
Aug 13 07:18:05.512743 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:18:05.526383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:18:05.527474 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:18:05.529781 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:18:05.541969 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Aug 13 07:18:05.546910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Aug 13 07:18:05.556965 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Aug 13 07:18:05.556060 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:18:05.558789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:18:05.562523 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:18:05.567003 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:18:05.574948 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:18:05.578047 systemd[1]: Reloading requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:18:05.578058 systemd[1]: Reloading...
Aug 13 07:18:05.599945 kernel: Guest personality initialized and is active
Aug 13 07:18:05.606314 (udev-worker)[1302]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Aug 13 07:18:05.610952 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 07:18:05.610990 kernel: Initialized host personality
Aug 13 07:18:05.611284 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:18:05.614177 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:18:05.614738 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:18:05.617341 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Aug 13 07:18:05.617740 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Aug 13 07:18:05.620963 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:18:05.626902 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:18:05.627002 systemd-tmpfiles[1350]: Skipping /boot
Aug 13 07:18:05.638469 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:18:05.638555 systemd-tmpfiles[1350]: Skipping /boot
Aug 13 07:18:05.648000 zram_generator::config[1396]: No configuration found.
Aug 13 07:18:05.700516 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Aug 13 07:18:05.715414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:18:05.759570 systemd[1]: Reloading finished in 181 ms.
Aug 13 07:18:05.774762 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:18:05.775146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:18:05.786419 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:18:05.789209 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:05.795165 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:18:05.799069 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:18:05.800096 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:18:05.802099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:18:05.805117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:18:05.806128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:18:05.809066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:18:05.809274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:18:05.811080 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:18:05.813584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:18:05.816006 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:18:05.817102 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:18:05.824113 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:18:05.826086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:05.826222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:05.827455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:18:05.828438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:18:05.834190 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:18:05.839030 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:18:05.839894 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:18:05.840025 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:18:05.847182 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:18:05.847601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:18:05.847703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:18:05.849030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:18:05.857099 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:18:05.857249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:18:05.857452 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:18:05.857556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:18:05.859916 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:18:05.864779 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:18:05.868740 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:18:05.880611 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:18:05.888423 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:18:05.897097 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:18:05.897461 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:18:05.898301 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:18:05.898486 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:18:05.908405 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:18:05.910874 augenrules[1488]: No rules
Aug 13 07:18:05.911178 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:18:05.929949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:05.955313 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:18:05.955497 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:18:05.957622 systemd-networkd[1349]: lo: Link UP
Aug 13 07:18:05.957627 systemd-networkd[1349]: lo: Gained carrier
Aug 13 07:18:05.958431 systemd-timesyncd[1459]: No network connectivity, watching for changes.
Aug 13 07:18:05.958548 systemd-networkd[1349]: Enumeration completed
Aug 13 07:18:05.958584 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:18:05.959803 systemd-networkd[1349]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Aug 13 07:18:05.961994 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Aug 13 07:18:05.962121 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Aug 13 07:18:05.962891 systemd-networkd[1349]: ens192: Link UP
Aug 13 07:18:05.963060 systemd-networkd[1349]: ens192: Gained carrier
Aug 13 07:18:05.968066 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:18:05.970403 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection.
Aug 13 07:18:05.973403 systemd-resolved[1450]: Positive Trust Anchors:
Aug 13 07:18:05.973411 systemd-resolved[1450]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:18:05.973435 systemd-resolved[1450]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:18:05.995986 systemd-resolved[1450]: Defaulting to hostname 'linux'.
Aug 13 07:18:05.997165 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:18:05.997414 systemd[1]: Reached target network.target - Network.
Aug 13 07:18:05.997581 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:18:05.997721 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:18:05.997904 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:18:05.998066 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:18:05.998302 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:18:05.998487 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:18:05.998623 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:18:05.998754 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:18:05.998771 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:18:05.998881 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:18:06.008349 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:18:06.009713 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:18:06.014330 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:18:06.014887 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:18:06.015106 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:18:06.015244 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:18:06.015404 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:18:06.015435 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:18:06.016331 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:18:06.019047 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:18:06.022090 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:18:06.024054 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:18:06.025215 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:18:06.028042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:18:06.029799 jq[1506]: false
Aug 13 07:18:06.032049 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:18:06.033589 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:18:06.035172 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:18:06.046055 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:18:06.046842 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:18:06.049038 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:18:06.050793 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:18:06.052993 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:18:06.060154 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Aug 13 07:18:06.063128 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:18:06.063256 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:18:06.063418 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:18:06.063815 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:18:06.065394 dbus-daemon[1505]: [system] SELinux support is enabled
Aug 13 07:19:36.144667 systemd-timesyncd[1459]: Contacted time server 104.234.67.234:123 (2.flatcar.pool.ntp.org).
Aug 13 07:19:36.144737 systemd-timesyncd[1459]: Initial clock synchronization to Wed 2025-08-13 07:19:36.144612 UTC.
Aug 13 07:19:36.144851 systemd-resolved[1450]: Clock change detected. Flushing caches.
Aug 13 07:19:36.145937 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:19:36.147286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:19:36.147407 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:19:36.153431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:19:36.153468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:19:36.153889 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:19:36.155862 jq[1521]: true
Aug 13 07:19:36.153905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found loop4
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found loop5
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found loop6
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found loop7
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda1
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda2
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda3
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found usr
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda4
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda6
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda7
Aug 13 07:19:36.169083 extend-filesystems[1507]: Found sda9
Aug 13 07:19:36.169083 extend-filesystems[1507]: Checking size of /dev/sda9
Aug 13 07:19:36.168975 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Aug 13 07:19:36.172419 jq[1532]: true
Aug 13 07:19:36.172570 systemd-logind[1513]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:19:36.172873 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Aug 13 07:19:36.175561 update_engine[1520]: I20250813 07:19:36.175326  1520 main.cc:92] Flatcar Update Engine starting
Aug 13 07:19:36.176896 systemd-logind[1513]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:19:36.177926 systemd-logind[1513]: New seat seat0.
Aug 13 07:19:36.179870 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:19:36.183064 extend-filesystems[1507]: Old size kept for /dev/sda9
Aug 13 07:19:36.186830 extend-filesystems[1507]: Found sr0
Aug 13 07:19:36.187185 update_engine[1520]: I20250813 07:19:36.187158  1520 update_check_scheduler.cc:74] Next update check in 5m27s
Aug 13 07:19:36.189144 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:19:36.189273 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:19:36.190066 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:19:36.190237 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:19:36.193846 tar[1525]: linux-amd64/helm
Aug 13 07:19:36.196015 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:19:36.215937 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Aug 13 07:19:36.229667 unknown[1535]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Aug 13 07:19:36.232554 unknown[1535]: Core dump limit set to -1
Aug 13 07:19:36.234149 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1305)
Aug 13 07:19:36.246891 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 07:19:36.266673 bash[1566]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:19:36.265355 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:19:36.268722 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 07:19:36.386368 locksmithd[1549]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:19:36.411698 sshd_keygen[1537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:19:36.441314 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:19:36.447014 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:19:36.456437 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:19:36.456573 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:19:36.462998 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:19:36.465336 containerd[1542]: time="2025-08-13T07:19:36.465300437Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:19:36.477705 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:19:36.484200 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:19:36.485729 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:19:36.486019 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:19:36.501954 containerd[1542]: time="2025-08-13T07:19:36.501847528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:36.502876 containerd[1542]: time="2025-08-13T07:19:36.502860997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:36.502920 containerd[1542]: time="2025-08-13T07:19:36.502912684Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:19:36.502956 containerd[1542]: time="2025-08-13T07:19:36.502948961Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:19:36.503100 containerd[1542]: time="2025-08-13T07:19:36.503090632Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:19:36.503197 containerd[1542]: time="2025-08-13T07:19:36.503129058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503228 containerd[1542]: time="2025-08-13T07:19:36.503169233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503255 containerd[1542]: time="2025-08-13T07:19:36.503248912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503407 containerd[1542]: time="2025-08-13T07:19:36.503396425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503532 containerd[1542]: time="2025-08-13T07:19:36.503435769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503532 containerd[1542]: time="2025-08-13T07:19:36.503446536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503532 containerd[1542]: time="2025-08-13T07:19:36.503452779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503532 containerd[1542]: time="2025-08-13T07:19:36.503495387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503772 containerd[1542]: time="2025-08-13T07:19:36.503761882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:19:36.503905 containerd[1542]: time="2025-08-13T07:19:36.503869727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:19:36.504095 containerd[1542]: time="2025-08-13T07:19:36.503934074Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:19:36.504095 containerd[1542]: time="2025-08-13T07:19:36.503983616Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:19:36.504095 containerd[1542]: time="2025-08-13T07:19:36.504012411Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:19:36.505377 containerd[1542]: time="2025-08-13T07:19:36.505152064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:19:36.505377 containerd[1542]: time="2025-08-13T07:19:36.505175187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:19:36.505377 containerd[1542]: time="2025-08-13T07:19:36.505185767Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:19:36.505377 containerd[1542]: time="2025-08-13T07:19:36.505194553Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:19:36.505377 containerd[1542]: time="2025-08-13T07:19:36.505202538Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:19:36.505377 containerd[1542]: time="2025-08-13T07:19:36.505267325Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:19:36.505470 containerd[1542]: time="2025-08-13T07:19:36.505402000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:19:36.505487 containerd[1542]: time="2025-08-13T07:19:36.505472717Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:19:36.505487 containerd[1542]: time="2025-08-13T07:19:36.505483412Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:19:36.505513 containerd[1542]: time="2025-08-13T07:19:36.505490984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:19:36.505513 containerd[1542]: time="2025-08-13T07:19:36.505498639Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505513 containerd[1542]: time="2025-08-13T07:19:36.505506205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505552 containerd[1542]: time="2025-08-13T07:19:36.505513293Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505552 containerd[1542]: time="2025-08-13T07:19:36.505520807Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505552 containerd[1542]: time="2025-08-13T07:19:36.505528510Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505552 containerd[1542]: time="2025-08-13T07:19:36.505535728Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505552 containerd[1542]: time="2025-08-13T07:19:36.505542883Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505552 containerd[1542]: time="2025-08-13T07:19:36.505549198Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505560358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505568175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505574854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505585034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505592480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505600455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505606736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505613493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505620352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505628 containerd[1542]: time="2025-08-13T07:19:36.505629095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505635719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505642281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505649021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505657320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505669578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505677138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505682707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505708752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505718513Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505725611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505732277Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505737526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505744222Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 07:19:36.505764 containerd[1542]: time="2025-08-13T07:19:36.505752568Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 07:19:36.505991 containerd[1542]: time="2025-08-13T07:19:36.505758496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 07:19:36.506009 containerd[1542]: time="2025-08-13T07:19:36.505920963Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 07:19:36.506009 containerd[1542]: time="2025-08-13T07:19:36.505954779Z" level=info msg="Connect containerd service"
Aug 13 07:19:36.506009 containerd[1542]: time="2025-08-13T07:19:36.505977227Z" level=info msg="using legacy CRI server"
Aug 13 07:19:36.506009 containerd[1542]: time="2025-08-13T07:19:36.505981658Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 07:19:36.506122 containerd[1542]: time="2025-08-13T07:19:36.506031318Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 07:19:36.506348 containerd[1542]: time="2025-08-13T07:19:36.506331869Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:19:36.506552 containerd[1542]: time="2025-08-13T07:19:36.506465331Z" level=info msg="Start subscribing containerd event"
Aug 13 07:19:36.506552 containerd[1542]: time="2025-08-13T07:19:36.506510487Z" level=info msg="Start recovering state"
Aug 13 07:19:36.506552 containerd[1542]: time="2025-08-13T07:19:36.506527314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 07:19:36.506600 containerd[1542]: time="2025-08-13T07:19:36.506554003Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 07:19:36.506766 containerd[1542]: time="2025-08-13T07:19:36.506755597Z" level=info msg="Start event monitor"
Aug 13 07:19:36.507048 containerd[1542]: time="2025-08-13T07:19:36.506793417Z" level=info msg="Start snapshots syncer"
Aug 13 07:19:36.507048 containerd[1542]: time="2025-08-13T07:19:36.506801970Z" level=info msg="Start cni network conf syncer for default"
Aug 13 07:19:36.507048 containerd[1542]: time="2025-08-13T07:19:36.506811700Z" level=info msg="Start streaming server"
Aug 13 07:19:36.506919 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 07:19:36.507386 containerd[1542]: time="2025-08-13T07:19:36.507333342Z" level=info msg="containerd successfully booted in 0.043115s"
Aug 13 07:19:36.650607 tar[1525]: linux-amd64/LICENSE
Aug 13 07:19:36.651043 tar[1525]: linux-amd64/README.md
Aug 13 07:19:36.661582 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 07:19:38.031993 systemd-networkd[1349]: ens192: Gained IPv6LL Aug 13 07:19:38.033453 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:19:38.033869 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:19:38.038990 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Aug 13 07:19:38.041924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:38.043876 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:19:38.064740 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:19:38.070110 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:19:38.070243 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Aug 13 07:19:38.070642 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:19:39.343305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:39.344018 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:19:39.346552 systemd[1]: Startup finished in 1.034s (kernel) + 5.465s (initrd) + 6.202s (userspace) = 12.702s. Aug 13 07:19:39.350559 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:19:39.398859 login[1638]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:19:39.401668 login[1643]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:19:39.407380 systemd-logind[1513]: New session 1 of user core. Aug 13 07:19:39.407771 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:19:39.412010 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
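The "Startup finished" line above reports the three boot phases systemd measures. A quick sanity check on the arithmetic (the printed total is summed from exact microsecond values, so it can differ from the sum of the rounded per-phase figures by about a millisecond):

```python
# Boot phases as printed by systemd above (seconds, already rounded).
kernel, initrd, userspace = 1.034, 5.465, 6.202
total = kernel + initrd + userspace
# systemd printed 12.702s; summing the rounded figures gives 12.701s,
# a rounding artifact rather than a measurement discrepancy.
print(f"{total:.3f}s")  # 12.701s
```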
Aug 13 07:19:39.415185 systemd-logind[1513]: New session 2 of user core. Aug 13 07:19:39.420682 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:19:39.423982 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:19:39.428408 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:19:39.514904 systemd[1690]: Queued start job for default target default.target. Aug 13 07:19:39.519945 systemd[1690]: Created slice app.slice - User Application Slice. Aug 13 07:19:39.519965 systemd[1690]: Reached target paths.target - Paths. Aug 13 07:19:39.519974 systemd[1690]: Reached target timers.target - Timers. Aug 13 07:19:39.520658 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:19:39.527591 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:19:39.528023 systemd[1690]: Reached target sockets.target - Sockets. Aug 13 07:19:39.528078 systemd[1690]: Reached target basic.target - Basic System. Aug 13 07:19:39.528136 systemd[1690]: Reached target default.target - Main User Target. Aug 13 07:19:39.528177 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:19:39.528240 systemd[1690]: Startup finished in 95ms. Aug 13 07:19:39.537328 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:19:39.538019 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 13 07:19:40.167391 kubelet[1682]: E0813 07:19:40.167321 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:19:40.169129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:19:40.169223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:19:50.253051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:19:50.265002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:50.597335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:50.600199 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:19:50.655515 kubelet[1735]: E0813 07:19:50.655473 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:19:50.658247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:19:50.658333 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:20:00.753187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:20:00.765074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:20:01.196493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
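The kubelet crash loop above ("open /var/lib/kubelet/config.yaml: no such file or directory", exit status 1, restart counter incrementing every ~10 s) is the stock behavior on a node that has not yet joined a cluster: that file is normally written by `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it exists. For illustration only, a minimal hypothetical KubeletConfiguration of the shape kubeadm drops at that path; the cgroup driver and CA path below match what this node's kubelet later reports once the file exists, but the DNS values are generic defaults, not read from this host:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Normally generated by kubeadm on join; shown here only to explain
# what the restart loop is waiting for.
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```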
Aug 13 07:20:01.199612 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:20:01.222040 kubelet[1749]: E0813 07:20:01.221987 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:20:01.223210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:20:01.223292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:20:06.333755 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:20:06.335973 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:44240.service - OpenSSH per-connection server daemon (139.178.68.195:44240). Aug 13 07:20:06.365133 sshd[1756]: Accepted publickey for core from 139.178.68.195 port 44240 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw Aug 13 07:20:06.366011 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:06.368644 systemd-logind[1513]: New session 3 of user core. Aug 13 07:20:06.376930 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:20:06.436330 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:44248.service - OpenSSH per-connection server daemon (139.178.68.195:44248). Aug 13 07:20:06.461755 sshd[1761]: Accepted publickey for core from 139.178.68.195 port 44248 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw Aug 13 07:20:06.462816 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:06.467845 systemd-logind[1513]: New session 4 of user core. 
Aug 13 07:20:06.472918 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:20:06.520500 sshd[1761]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:06.529467 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:44248.service: Deactivated successfully. Aug 13 07:20:06.530418 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:20:06.531376 systemd-logind[1513]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:20:06.537072 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:44264.service - OpenSSH per-connection server daemon (139.178.68.195:44264). Aug 13 07:20:06.537945 systemd-logind[1513]: Removed session 4. Aug 13 07:20:06.562640 sshd[1768]: Accepted publickey for core from 139.178.68.195 port 44264 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw Aug 13 07:20:06.563403 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:06.566456 systemd-logind[1513]: New session 5 of user core. Aug 13 07:20:06.572992 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:20:06.620227 sshd[1768]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:06.630582 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:44264.service: Deactivated successfully. Aug 13 07:20:06.631519 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:20:06.631972 systemd-logind[1513]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:20:06.633147 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:44274.service - OpenSSH per-connection server daemon (139.178.68.195:44274). Aug 13 07:20:06.635903 systemd-logind[1513]: Removed session 5. 
Aug 13 07:20:06.662085 sshd[1775]: Accepted publickey for core from 139.178.68.195 port 44274 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw Aug 13 07:20:06.662958 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:06.666269 systemd-logind[1513]: New session 6 of user core. Aug 13 07:20:06.684071 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:20:06.733092 sshd[1775]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:06.740568 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:44274.service: Deactivated successfully. Aug 13 07:20:06.741556 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:20:06.742565 systemd-logind[1513]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:20:06.747079 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:44288.service - OpenSSH per-connection server daemon (139.178.68.195:44288). Aug 13 07:20:06.748052 systemd-logind[1513]: Removed session 6. Aug 13 07:20:06.772788 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 44288 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw Aug 13 07:20:06.773604 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:06.776370 systemd-logind[1513]: New session 7 of user core. Aug 13 07:20:06.784985 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:20:06.843659 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:20:06.843881 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:20:06.856483 sudo[1785]: pam_unix(sudo:session): session closed for user root Aug 13 07:20:06.858305 sshd[1782]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:06.862518 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:44288.service: Deactivated successfully. 
Aug 13 07:20:06.863537 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:20:06.864560 systemd-logind[1513]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:20:06.867088 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:44298.service - OpenSSH per-connection server daemon (139.178.68.195:44298). Aug 13 07:20:06.868105 systemd-logind[1513]: Removed session 7. Aug 13 07:20:06.894302 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 44298 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw Aug 13 07:20:06.895207 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:06.898624 systemd-logind[1513]: New session 8 of user core. Aug 13 07:20:06.908022 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:20:06.957865 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:20:06.958075 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:20:06.960860 sudo[1794]: pam_unix(sudo:session): session closed for user root Aug 13 07:20:06.964482 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:20:06.964680 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:20:06.980004 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:20:06.981157 auditctl[1797]: No rules Aug 13 07:20:06.981480 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:20:06.981621 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:20:06.983483 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:20:07.002216 augenrules[1815]: No rules Aug 13 07:20:07.002863 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Aug 13 07:20:07.003543 sudo[1793]: pam_unix(sudo:session): session closed for user root Aug 13 07:20:07.004345 sshd[1790]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:07.008339 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:44298.service: Deactivated successfully. Aug 13 07:20:07.009064 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:20:07.009786 systemd-logind[1513]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:20:07.015152 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:44304.service - OpenSSH per-connection server daemon (139.178.68.195:44304). Aug 13 07:20:07.015943 systemd-logind[1513]: Removed session 8. Aug 13 07:20:07.037716 sshd[1823]: Accepted publickey for core from 139.178.68.195 port 44304 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw Aug 13 07:20:07.038743 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:07.042954 systemd-logind[1513]: New session 9 of user core. Aug 13 07:20:07.051998 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:20:07.101521 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:20:07.101729 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:20:07.393946 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:20:07.394034 (dockerd)[1843]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:20:07.723039 dockerd[1843]: time="2025-08-13T07:20:07.722836572Z" level=info msg="Starting up" Aug 13 07:20:07.805879 dockerd[1843]: time="2025-08-13T07:20:07.805848073Z" level=info msg="Loading containers: start." 
Aug 13 07:20:07.883843 kernel: Initializing XFRM netlink socket Aug 13 07:20:07.943296 systemd-networkd[1349]: docker0: Link UP Aug 13 07:20:07.955651 dockerd[1843]: time="2025-08-13T07:20:07.955625774Z" level=info msg="Loading containers: done." Aug 13 07:20:07.962641 dockerd[1843]: time="2025-08-13T07:20:07.962616867Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:20:07.962719 dockerd[1843]: time="2025-08-13T07:20:07.962674122Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:20:07.962742 dockerd[1843]: time="2025-08-13T07:20:07.962723081Z" level=info msg="Daemon has completed initialization" Aug 13 07:20:07.979889 dockerd[1843]: time="2025-08-13T07:20:07.979784455Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:20:07.980158 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:20:08.799441 containerd[1542]: time="2025-08-13T07:20:08.799360128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 07:20:09.434936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811669134.mount: Deactivated successfully. 
Aug 13 07:20:10.325661 containerd[1542]: time="2025-08-13T07:20:10.325101653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:10.326020 containerd[1542]: time="2025-08-13T07:20:10.326002137Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 07:20:10.326407 containerd[1542]: time="2025-08-13T07:20:10.326384648Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:10.327810 containerd[1542]: time="2025-08-13T07:20:10.327794327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:10.328552 containerd[1542]: time="2025-08-13T07:20:10.328452227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.529068012s" Aug 13 07:20:10.328552 containerd[1542]: time="2025-08-13T07:20:10.328469872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 07:20:10.328829 containerd[1542]: time="2025-08-13T07:20:10.328806239Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 07:20:11.253113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
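The pull record above reports both the image size and the wall-clock time, so the effective registry throughput for this pull can be derived directly (figures copied from the containerd lines above):

```python
# kube-apiserver:v1.31.11 pull, as reported by containerd.
size_bytes = 28_074_559   # 'size "28074559"'
elapsed_s = 1.529068012   # 'in 1.529068012s'
throughput_mib_s = size_bytes / elapsed_s / (1024 * 1024)
print(f"{throughput_mib_s:.1f} MiB/s")  # 17.5 MiB/s
```

The same calculation applies to the other pulls in this log; the per-image rates vary because layer unpacking time is included in the reported duration, not just network transfer.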
Aug 13 07:20:11.264029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:20:11.334728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:20:11.337303 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:20:11.358416 kubelet[2043]: E0813 07:20:11.358373 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:20:11.359757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:20:11.359868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:20:12.737708 containerd[1542]: time="2025-08-13T07:20:12.737665721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:12.738258 containerd[1542]: time="2025-08-13T07:20:12.738195736Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 07:20:12.738873 containerd[1542]: time="2025-08-13T07:20:12.738569615Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:12.740742 containerd[1542]: time="2025-08-13T07:20:12.740698222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:12.742437 containerd[1542]: 
time="2025-08-13T07:20:12.742384745Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 2.413475761s" Aug 13 07:20:12.742437 containerd[1542]: time="2025-08-13T07:20:12.742405498Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 07:20:12.743014 containerd[1542]: time="2025-08-13T07:20:12.742868635Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 07:20:13.857472 containerd[1542]: time="2025-08-13T07:20:13.857183338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:13.858468 containerd[1542]: time="2025-08-13T07:20:13.857567520Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 07:20:13.858468 containerd[1542]: time="2025-08-13T07:20:13.857777284Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:13.859515 containerd[1542]: time="2025-08-13T07:20:13.859486114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:13.860110 containerd[1542]: time="2025-08-13T07:20:13.860093376Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id 
\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.11720503s" Aug 13 07:20:13.860140 containerd[1542]: time="2025-08-13T07:20:13.860110791Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 07:20:13.860601 containerd[1542]: time="2025-08-13T07:20:13.860514304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:20:14.774553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438161523.mount: Deactivated successfully. Aug 13 07:20:15.394147 containerd[1542]: time="2025-08-13T07:20:15.393854199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:15.394416 containerd[1542]: time="2025-08-13T07:20:15.394406848Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 07:20:15.395045 containerd[1542]: time="2025-08-13T07:20:15.394709002Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:15.396473 containerd[1542]: time="2025-08-13T07:20:15.396453323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:15.396988 containerd[1542]: time="2025-08-13T07:20:15.396844627Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.536313523s" Aug 13 07:20:15.396988 containerd[1542]: time="2025-08-13T07:20:15.396867135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 07:20:15.397458 containerd[1542]: time="2025-08-13T07:20:15.397308406Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:20:15.884867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244891537.mount: Deactivated successfully. Aug 13 07:20:16.600772 containerd[1542]: time="2025-08-13T07:20:16.600734052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:16.605850 containerd[1542]: time="2025-08-13T07:20:16.605649186Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:20:16.610780 containerd[1542]: time="2025-08-13T07:20:16.610763821Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:16.615630 containerd[1542]: time="2025-08-13T07:20:16.615601198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:16.616476 containerd[1542]: time="2025-08-13T07:20:16.616374492Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.21904594s" Aug 13 07:20:16.616476 containerd[1542]: time="2025-08-13T07:20:16.616401411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:20:16.617279 containerd[1542]: time="2025-08-13T07:20:16.617259775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:20:17.396718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268465252.mount: Deactivated successfully. Aug 13 07:20:17.398678 containerd[1542]: time="2025-08-13T07:20:17.398657893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:17.399016 containerd[1542]: time="2025-08-13T07:20:17.398993871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:20:17.399416 containerd[1542]: time="2025-08-13T07:20:17.399388979Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:17.400428 containerd[1542]: time="2025-08-13T07:20:17.400405388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:17.400922 containerd[1542]: time="2025-08-13T07:20:17.400853720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 783.573287ms" Aug 13 
07:20:17.400922 containerd[1542]: time="2025-08-13T07:20:17.400871863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:20:17.401208 containerd[1542]: time="2025-08-13T07:20:17.401109401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:20:17.957173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468769087.mount: Deactivated successfully. Aug 13 07:20:21.503070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 07:20:21.509944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:20:21.658174 update_engine[1520]: I20250813 07:20:21.657843 1520 update_attempter.cc:509] Updating boot flags... Aug 13 07:20:21.972297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:20:21.974966 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:20:22.096869 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2188) Aug 13 07:20:22.232469 kubelet[2180]: E0813 07:20:22.232351 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:20:22.234116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:20:22.234229 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:20:23.442414 containerd[1542]: time="2025-08-13T07:20:23.442385547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:23.442858 containerd[1542]: time="2025-08-13T07:20:23.442832890Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 07:20:23.443326 containerd[1542]: time="2025-08-13T07:20:23.443313927Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:23.445060 containerd[1542]: time="2025-08-13T07:20:23.445047992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:23.445749 containerd[1542]: time="2025-08-13T07:20:23.445735431Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.044610086s" Aug 13 07:20:23.445797 containerd[1542]: time="2025-08-13T07:20:23.445788718Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:20:25.010143 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:20:25.015973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:20:25.037709 systemd[1]: Reloading requested from client PID 2233 ('systemctl') (unit session-9.scope)... Aug 13 07:20:25.037719 systemd[1]: Reloading... 
Aug 13 07:20:25.096832 zram_generator::config[2277]: No configuration found. Aug 13 07:20:25.151392 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Aug 13 07:20:25.166324 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:20:25.210354 systemd[1]: Reloading finished in 172 ms. Aug 13 07:20:25.233767 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:20:25.233813 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:20:25.234193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:20:25.241000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:20:25.674899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:20:25.678662 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:20:25.723341 kubelet[2338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:20:25.723341 kubelet[2338]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:20:25.723341 kubelet[2338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
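The deprecation warnings above all point at the same remedy: move the flags into the file passed via --config. A hypothetical KubeletConfiguration fragment showing the equivalent keys (the socket and plugin-dir paths are illustrative; `containerRuntimeEndpoint` has been accepted in the config file since kubelet v1.27, which covers the v1.31.8 kubelet in this log):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint:
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir:
volumePluginDir: /var/lib/kubelet/volumeplugins
```

The --pod-infra-container-image flag has no config-file equivalent; per the warning it will simply be dropped once the image garbage collector reads the sandbox image from the CRI.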
Aug 13 07:20:25.723574 kubelet[2338]: I0813 07:20:25.723382 2338 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:20:25.896832 kubelet[2338]: I0813 07:20:25.896388 2338 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:20:25.896832 kubelet[2338]: I0813 07:20:25.896407 2338 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:20:25.896832 kubelet[2338]: I0813 07:20:25.896562 2338 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:20:26.012357 kubelet[2338]: E0813 07:20:26.012325 2338 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:26.013643 kubelet[2338]: I0813 07:20:26.013628 2338 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:20:26.031928 kubelet[2338]: E0813 07:20:26.031904 2338 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:20:26.031928 kubelet[2338]: I0813 07:20:26.031923 2338 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:20:26.035326 kubelet[2338]: I0813 07:20:26.035270 2338 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:20:26.036792 kubelet[2338]: I0813 07:20:26.036779 2338 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:20:26.036902 kubelet[2338]: I0813 07:20:26.036884 2338 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:20:26.037011 kubelet[2338]: I0813 07:20:26.036901 2338 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Aug 13 07:20:26.037095 kubelet[2338]: I0813 07:20:26.037016 2338 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:20:26.037095 kubelet[2338]: I0813 07:20:26.037023 2338 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:20:26.037095 kubelet[2338]: I0813 07:20:26.037094 2338 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:20:26.041333 kubelet[2338]: I0813 07:20:26.041234 2338 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:20:26.041333 kubelet[2338]: I0813 07:20:26.041256 2338 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:20:26.042072 kubelet[2338]: I0813 07:20:26.041904 2338 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:20:26.042072 kubelet[2338]: I0813 07:20:26.041937 2338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:20:26.047513 kubelet[2338]: W0813 07:20:26.047479 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:26.047567 kubelet[2338]: E0813 07:20:26.047531 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:26.050278 kubelet[2338]: I0813 07:20:26.050178 2338 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:20:26.058886 kubelet[2338]: W0813 07:20:26.058857 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:26.058926 kubelet[2338]: E0813 07:20:26.058888 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:26.075591 kubelet[2338]: I0813 07:20:26.075463 2338 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:20:26.081095 kubelet[2338]: W0813 07:20:26.081083 2338 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:20:26.081673 kubelet[2338]: I0813 07:20:26.081571 2338 server.go:1274] "Started kubelet" Aug 13 07:20:26.084650 kubelet[2338]: I0813 07:20:26.084230 2338 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:20:26.098334 kubelet[2338]: I0813 07:20:26.097737 2338 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:20:26.102435 kubelet[2338]: I0813 07:20:26.101967 2338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:20:26.102435 kubelet[2338]: I0813 07:20:26.102116 2338 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:20:26.103466 kubelet[2338]: I0813 07:20:26.103452 2338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:20:26.113803 kubelet[2338]: I0813 07:20:26.113783 2338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 
07:20:26.118289 kubelet[2338]: E0813 07:20:26.102352 2338 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b4286b2ab2e01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:20:26.081553921 +0000 UTC m=+0.399451882,LastTimestamp:2025-08-13 07:20:26.081553921 +0000 UTC m=+0.399451882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:20:26.121002 kubelet[2338]: I0813 07:20:26.120986 2338 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:20:26.121625 kubelet[2338]: E0813 07:20:26.121140 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:20:26.125260 kubelet[2338]: I0813 07:20:26.125241 2338 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:20:26.132191 kubelet[2338]: I0813 07:20:26.125314 2338 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:20:26.132191 kubelet[2338]: E0813 07:20:26.125610 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Aug 13 07:20:26.133778 kubelet[2338]: W0813 07:20:26.133216 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:26.133778 kubelet[2338]: E0813 07:20:26.133255 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:26.133778 kubelet[2338]: I0813 07:20:26.133332 2338 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:20:26.133778 kubelet[2338]: I0813 07:20:26.133410 2338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:20:26.135277 kubelet[2338]: I0813 07:20:26.134282 2338 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:20:26.136515 kubelet[2338]: E0813 07:20:26.134649 2338 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:20:26.145306 kubelet[2338]: I0813 07:20:26.145233 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:20:26.148607 kubelet[2338]: I0813 07:20:26.148588 2338 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:20:26.148607 kubelet[2338]: I0813 07:20:26.148607 2338 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:20:26.148690 kubelet[2338]: I0813 07:20:26.148618 2338 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:20:26.149132 kubelet[2338]: E0813 07:20:26.148731 2338 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:20:26.149132 kubelet[2338]: I0813 07:20:26.148857 2338 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:20:26.149132 kubelet[2338]: I0813 07:20:26.148863 2338 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:20:26.149132 kubelet[2338]: I0813 07:20:26.148886 2338 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:20:26.150144 kubelet[2338]: I0813 07:20:26.150004 2338 policy_none.go:49] "None policy: Start" Aug 13 07:20:26.150574 kubelet[2338]: I0813 07:20:26.150567 2338 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:20:26.150628 kubelet[2338]: I0813 07:20:26.150623 2338 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:20:26.150792 kubelet[2338]: W0813 07:20:26.150741 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:26.150845 kubelet[2338]: E0813 07:20:26.150801 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:26.156960 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Aug 13 07:20:26.164400 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:20:26.172612 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:20:26.173899 kubelet[2338]: I0813 07:20:26.173234 2338 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:20:26.173899 kubelet[2338]: I0813 07:20:26.173334 2338 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:20:26.173899 kubelet[2338]: I0813 07:20:26.173340 2338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:20:26.173899 kubelet[2338]: I0813 07:20:26.173606 2338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:20:26.174443 kubelet[2338]: E0813 07:20:26.174436 2338 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:20:26.256315 systemd[1]: Created slice kubepods-burstable-podc63fd626b4c22e3faea22e7f17a8db57.slice - libcontainer container kubepods-burstable-podc63fd626b4c22e3faea22e7f17a8db57.slice. Aug 13 07:20:26.267956 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Aug 13 07:20:26.271663 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. 
Aug 13 07:20:26.274267 kubelet[2338]: I0813 07:20:26.274244 2338 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:20:26.274529 kubelet[2338]: E0813 07:20:26.274504 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 07:20:26.326242 kubelet[2338]: E0813 07:20:26.326205 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Aug 13 07:20:26.427171 kubelet[2338]: I0813 07:20:26.427060 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:26.427171 kubelet[2338]: I0813 07:20:26.427095 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:26.427171 kubelet[2338]: I0813 07:20:26.427138 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:26.427171 kubelet[2338]: I0813 07:20:26.427156 2338 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:20:26.427352 kubelet[2338]: I0813 07:20:26.427184 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c63fd626b4c22e3faea22e7f17a8db57-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63fd626b4c22e3faea22e7f17a8db57\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:20:26.427352 kubelet[2338]: I0813 07:20:26.427210 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c63fd626b4c22e3faea22e7f17a8db57-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63fd626b4c22e3faea22e7f17a8db57\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:20:26.427352 kubelet[2338]: I0813 07:20:26.427227 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:26.427352 kubelet[2338]: I0813 07:20:26.427243 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c63fd626b4c22e3faea22e7f17a8db57-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c63fd626b4c22e3faea22e7f17a8db57\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:20:26.427352 kubelet[2338]: I0813 07:20:26.427255 2338 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:26.475598 kubelet[2338]: I0813 07:20:26.475576 2338 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:20:26.475847 kubelet[2338]: E0813 07:20:26.475813 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 07:20:26.567640 containerd[1542]: time="2025-08-13T07:20:26.567516248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c63fd626b4c22e3faea22e7f17a8db57,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:26.577288 containerd[1542]: time="2025-08-13T07:20:26.577158628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:26.577288 containerd[1542]: time="2025-08-13T07:20:26.577191325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:26.726648 kubelet[2338]: E0813 07:20:26.726618 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Aug 13 07:20:26.877336 kubelet[2338]: I0813 07:20:26.876952 2338 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:20:26.877336 kubelet[2338]: E0813 07:20:26.877202 2338 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 07:20:26.972131 kubelet[2338]: W0813 07:20:26.972053 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:26.972131 kubelet[2338]: E0813 07:20:26.972101 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:27.006539 kubelet[2338]: W0813 07:20:27.006475 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:27.006539 kubelet[2338]: E0813 07:20:27.006516 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:27.090813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856745868.mount: Deactivated successfully. 
Aug 13 07:20:27.092645 containerd[1542]: time="2025-08-13T07:20:27.092619872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:20:27.093364 containerd[1542]: time="2025-08-13T07:20:27.093292471Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:20:27.093945 containerd[1542]: time="2025-08-13T07:20:27.093904154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:20:27.094240 containerd[1542]: time="2025-08-13T07:20:27.094122242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:20:27.095059 containerd[1542]: time="2025-08-13T07:20:27.095039931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:20:27.095113 containerd[1542]: time="2025-08-13T07:20:27.095073692Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:20:27.097637 containerd[1542]: time="2025-08-13T07:20:27.097617787Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:20:27.098438 containerd[1542]: time="2025-08-13T07:20:27.098144817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.567339ms" Aug 13 07:20:27.098999 containerd[1542]: time="2025-08-13T07:20:27.098981306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.779284ms" Aug 13 07:20:27.100687 containerd[1542]: time="2025-08-13T07:20:27.100661393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:20:27.101521 containerd[1542]: time="2025-08-13T07:20:27.101502585Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.2727ms" Aug 13 07:20:27.390925 kubelet[2338]: W0813 07:20:27.390747 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:27.407029 kubelet[2338]: E0813 07:20:27.407003 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection 
refused" logger="UnhandledError" Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.460532585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.460574477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.460595028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.460661056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.457424037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.457454566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.457462748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:27.464221 containerd[1542]: time="2025-08-13T07:20:27.457516325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:27.484921 systemd[1]: Started cri-containerd-861909dc41646d2cedd5ded3a5ea3565374f850093d70e20de7754af0a7e7a0a.scope - libcontainer container 861909dc41646d2cedd5ded3a5ea3565374f850093d70e20de7754af0a7e7a0a. 
Aug 13 07:20:27.486691 systemd[1]: Started cri-containerd-0031764d0a60b5913dc998bfb1ad5923d320d20d77b665878e3516df0bbfbc59.scope - libcontainer container 0031764d0a60b5913dc998bfb1ad5923d320d20d77b665878e3516df0bbfbc59. Aug 13 07:20:27.527603 kubelet[2338]: E0813 07:20:27.527573 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Aug 13 07:20:27.538074 containerd[1542]: time="2025-08-13T07:20:27.537849202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c63fd626b4c22e3faea22e7f17a8db57,Namespace:kube-system,Attempt:0,} returns sandbox id \"0031764d0a60b5913dc998bfb1ad5923d320d20d77b665878e3516df0bbfbc59\"" Aug 13 07:20:27.542538 containerd[1542]: time="2025-08-13T07:20:27.538480307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"861909dc41646d2cedd5ded3a5ea3565374f850093d70e20de7754af0a7e7a0a\"" Aug 13 07:20:27.551268 containerd[1542]: time="2025-08-13T07:20:27.551254345Z" level=info msg="CreateContainer within sandbox \"861909dc41646d2cedd5ded3a5ea3565374f850093d70e20de7754af0a7e7a0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:20:27.551556 containerd[1542]: time="2025-08-13T07:20:27.551423584Z" level=info msg="CreateContainer within sandbox \"0031764d0a60b5913dc998bfb1ad5923d320d20d77b665878e3516df0bbfbc59\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:20:27.635436 containerd[1542]: time="2025-08-13T07:20:27.635407999Z" level=info msg="CreateContainer within sandbox \"0031764d0a60b5913dc998bfb1ad5923d320d20d77b665878e3516df0bbfbc59\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"f34b804c6e1c8498fb3cd41b4e7a61ff9b1495243ef79ad5da0e092975159afb\"" Aug 13 07:20:27.636597 containerd[1542]: time="2025-08-13T07:20:27.636414920Z" level=info msg="CreateContainer within sandbox \"861909dc41646d2cedd5ded3a5ea3565374f850093d70e20de7754af0a7e7a0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e711e92309e8c7fc7f48832497dca3854c9a563191404c832225a818adacd002\"" Aug 13 07:20:27.637216 containerd[1542]: time="2025-08-13T07:20:27.636877915Z" level=info msg="StartContainer for \"e711e92309e8c7fc7f48832497dca3854c9a563191404c832225a818adacd002\"" Aug 13 07:20:27.641686 containerd[1542]: time="2025-08-13T07:20:27.641153105Z" level=info msg="StartContainer for \"f34b804c6e1c8498fb3cd41b4e7a61ff9b1495243ef79ad5da0e092975159afb\"" Aug 13 07:20:27.646193 containerd[1542]: time="2025-08-13T07:20:27.644389992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:27.646193 containerd[1542]: time="2025-08-13T07:20:27.644418521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:27.646193 containerd[1542]: time="2025-08-13T07:20:27.644485699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:27.646193 containerd[1542]: time="2025-08-13T07:20:27.644548463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:27.661917 systemd[1]: Started cri-containerd-81c6505da6598f3103b4ab411571be47f94c54c8c46f026890186184a4217704.scope - libcontainer container 81c6505da6598f3103b4ab411571be47f94c54c8c46f026890186184a4217704. 
Aug 13 07:20:27.665183 systemd[1]: Started cri-containerd-e711e92309e8c7fc7f48832497dca3854c9a563191404c832225a818adacd002.scope - libcontainer container e711e92309e8c7fc7f48832497dca3854c9a563191404c832225a818adacd002. Aug 13 07:20:27.666204 systemd[1]: Started cri-containerd-f34b804c6e1c8498fb3cd41b4e7a61ff9b1495243ef79ad5da0e092975159afb.scope - libcontainer container f34b804c6e1c8498fb3cd41b4e7a61ff9b1495243ef79ad5da0e092975159afb. Aug 13 07:20:27.678680 kubelet[2338]: I0813 07:20:27.678468 2338 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:20:27.678680 kubelet[2338]: E0813 07:20:27.678663 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 07:20:27.688164 kubelet[2338]: W0813 07:20:27.688136 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 07:20:27.688164 kubelet[2338]: E0813 07:20:27.688158 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:27.714382 containerd[1542]: time="2025-08-13T07:20:27.714280787Z" level=info msg="StartContainer for \"e711e92309e8c7fc7f48832497dca3854c9a563191404c832225a818adacd002\" returns successfully" Aug 13 07:20:27.714382 containerd[1542]: time="2025-08-13T07:20:27.714346672Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"81c6505da6598f3103b4ab411571be47f94c54c8c46f026890186184a4217704\"" Aug 13 07:20:27.717905 containerd[1542]: time="2025-08-13T07:20:27.717542416Z" level=info msg="CreateContainer within sandbox \"81c6505da6598f3103b4ab411571be47f94c54c8c46f026890186184a4217704\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:20:27.722765 containerd[1542]: time="2025-08-13T07:20:27.722687422Z" level=info msg="StartContainer for \"f34b804c6e1c8498fb3cd41b4e7a61ff9b1495243ef79ad5da0e092975159afb\" returns successfully" Aug 13 07:20:27.737279 containerd[1542]: time="2025-08-13T07:20:27.737251895Z" level=info msg="CreateContainer within sandbox \"81c6505da6598f3103b4ab411571be47f94c54c8c46f026890186184a4217704\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"702e68f3b2d8c7635c4a22fd480450bb2fc8edccb8c6f738650ccab160c53284\"" Aug 13 07:20:27.737908 containerd[1542]: time="2025-08-13T07:20:27.737733348Z" level=info msg="StartContainer for \"702e68f3b2d8c7635c4a22fd480450bb2fc8edccb8c6f738650ccab160c53284\"" Aug 13 07:20:27.758917 systemd[1]: Started cri-containerd-702e68f3b2d8c7635c4a22fd480450bb2fc8edccb8c6f738650ccab160c53284.scope - libcontainer container 702e68f3b2d8c7635c4a22fd480450bb2fc8edccb8c6f738650ccab160c53284. 
Aug 13 07:20:27.792038 containerd[1542]: time="2025-08-13T07:20:27.792013197Z" level=info msg="StartContainer for \"702e68f3b2d8c7635c4a22fd480450bb2fc8edccb8c6f738650ccab160c53284\" returns successfully" Aug 13 07:20:28.056697 kubelet[2338]: E0813 07:20:28.056667 2338 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:20:29.226161 kubelet[2338]: E0813 07:20:29.226123 2338 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:20:29.280461 kubelet[2338]: I0813 07:20:29.280410 2338 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:20:29.303552 kubelet[2338]: I0813 07:20:29.303525 2338 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:20:29.303552 kubelet[2338]: E0813 07:20:29.303549 2338 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 07:20:29.312791 kubelet[2338]: E0813 07:20:29.312761 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:20:29.413396 kubelet[2338]: E0813 07:20:29.413365 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:20:30.049886 kubelet[2338]: I0813 07:20:30.049860 2338 apiserver.go:52] "Watching apiserver" Aug 13 07:20:30.125409 kubelet[2338]: I0813 07:20:30.125359 2338 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:20:31.369549 systemd[1]: Reloading 
requested from client PID 2607 ('systemctl') (unit session-9.scope)... Aug 13 07:20:31.369561 systemd[1]: Reloading... Aug 13 07:20:31.437848 zram_generator::config[2647]: No configuration found. Aug 13 07:20:31.496285 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Aug 13 07:20:31.511574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:20:31.566367 systemd[1]: Reloading finished in 196 ms. Aug 13 07:20:31.597098 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:20:31.604053 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:20:31.604186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:20:31.613055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:20:31.952926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:20:31.962118 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:20:32.066787 kubelet[2712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:20:32.066787 kubelet[2712]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:20:32.066787 kubelet[2712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:20:32.066787 kubelet[2712]: I0813 07:20:32.066172 2712 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:20:32.093429 kubelet[2712]: I0813 07:20:32.093403 2712 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:20:32.093429 kubelet[2712]: I0813 07:20:32.093421 2712 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:20:32.093596 kubelet[2712]: I0813 07:20:32.093580 2712 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:20:32.094552 kubelet[2712]: I0813 07:20:32.094535 2712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:20:32.102067 kubelet[2712]: I0813 07:20:32.102016 2712 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:20:32.123215 kubelet[2712]: E0813 07:20:32.122706 2712 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:20:32.123215 kubelet[2712]: I0813 07:20:32.122725 2712 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:20:32.126389 kubelet[2712]: I0813 07:20:32.126369 2712 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:20:32.126463 kubelet[2712]: I0813 07:20:32.126450 2712 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:20:32.126544 kubelet[2712]: I0813 07:20:32.126524 2712 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:20:32.126725 kubelet[2712]: I0813 07:20:32.126542 2712 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Aug 13 07:20:32.126796 kubelet[2712]: I0813 07:20:32.126734 2712 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:20:32.126796 kubelet[2712]: I0813 07:20:32.126743 2712 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:20:32.126796 kubelet[2712]: I0813 07:20:32.126763 2712 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:20:32.126886 kubelet[2712]: I0813 07:20:32.126849 2712 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:20:32.126886 kubelet[2712]: I0813 07:20:32.126859 2712 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:20:32.126886 kubelet[2712]: I0813 07:20:32.126881 2712 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:20:32.126950 kubelet[2712]: I0813 07:20:32.126889 2712 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:20:32.143886 kubelet[2712]: I0813 07:20:32.143770 2712 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:20:32.144585 kubelet[2712]: I0813 07:20:32.144170 2712 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:20:32.144585 kubelet[2712]: I0813 07:20:32.144491 2712 server.go:1274] "Started kubelet" Aug 13 07:20:32.147287 kubelet[2712]: I0813 07:20:32.147260 2712 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:20:32.150408 kubelet[2712]: I0813 07:20:32.150198 2712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:20:32.152617 kubelet[2712]: I0813 07:20:32.150579 2712 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:20:32.152617 kubelet[2712]: I0813 07:20:32.151371 2712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 
07:20:32.155374 kubelet[2712]: I0813 07:20:32.155357 2712 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:20:32.158733 kubelet[2712]: I0813 07:20:32.158719 2712 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:20:32.160756 kubelet[2712]: I0813 07:20:32.160745 2712 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:20:32.165033 kubelet[2712]: E0813 07:20:32.165019 2712 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:20:32.168444 kubelet[2712]: I0813 07:20:32.168411 2712 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:20:32.168601 kubelet[2712]: I0813 07:20:32.168594 2712 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:20:32.169361 kubelet[2712]: E0813 07:20:32.169346 2712 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:20:32.169546 kubelet[2712]: I0813 07:20:32.169534 2712 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:20:32.169599 kubelet[2712]: I0813 07:20:32.169587 2712 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:20:32.170100 kubelet[2712]: I0813 07:20:32.170081 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:20:32.170744 kubelet[2712]: I0813 07:20:32.170734 2712 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:20:32.170795 kubelet[2712]: I0813 07:20:32.170789 2712 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:20:32.170897 kubelet[2712]: I0813 07:20:32.170892 2712 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:20:32.170958 kubelet[2712]: E0813 07:20:32.170948 2712 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:20:32.176582 kubelet[2712]: I0813 07:20:32.176563 2712 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:20:32.215091 kubelet[2712]: I0813 07:20:32.214966 2712 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:20:32.215091 kubelet[2712]: I0813 07:20:32.215044 2712 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:20:32.215091 kubelet[2712]: I0813 07:20:32.215059 2712 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:20:32.216006 kubelet[2712]: I0813 07:20:32.215992 2712 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:20:32.216042 kubelet[2712]: I0813 07:20:32.216004 2712 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:20:32.216042 kubelet[2712]: I0813 07:20:32.216020 2712 policy_none.go:49] "None policy: Start" Aug 13 07:20:32.217252 kubelet[2712]: I0813 07:20:32.217228 2712 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:20:32.217282 kubelet[2712]: I0813 07:20:32.217254 2712 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:20:32.217363 kubelet[2712]: I0813 07:20:32.217352 2712 state_mem.go:75] "Updated machine memory state" Aug 13 07:20:32.219771 kubelet[2712]: I0813 07:20:32.219756 2712 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:20:32.220181 kubelet[2712]: I0813 07:20:32.220170 2712 eviction_manager.go:189] 
"Eviction manager: starting control loop" Aug 13 07:20:32.220215 kubelet[2712]: I0813 07:20:32.220181 2712 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:20:32.220842 kubelet[2712]: I0813 07:20:32.220830 2712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:20:32.321372 kubelet[2712]: I0813 07:20:32.321257 2712 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:20:32.333305 kubelet[2712]: I0813 07:20:32.333254 2712 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 07:20:32.333446 kubelet[2712]: I0813 07:20:32.333378 2712 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:20:32.469792 kubelet[2712]: I0813 07:20:32.469714 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c63fd626b4c22e3faea22e7f17a8db57-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c63fd626b4c22e3faea22e7f17a8db57\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:20:32.469792 kubelet[2712]: I0813 07:20:32.469756 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:20:32.469792 kubelet[2712]: I0813 07:20:32.469772 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c63fd626b4c22e3faea22e7f17a8db57-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63fd626b4c22e3faea22e7f17a8db57\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:20:32.469792 kubelet[2712]: I0813 07:20:32.469789 2712 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:32.469962 kubelet[2712]: I0813 07:20:32.469850 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:32.469962 kubelet[2712]: I0813 07:20:32.469866 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:32.469962 kubelet[2712]: I0813 07:20:32.469876 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:32.470302 kubelet[2712]: I0813 07:20:32.469888 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:20:32.470390 
kubelet[2712]: I0813 07:20:32.470309 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c63fd626b4c22e3faea22e7f17a8db57-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63fd626b4c22e3faea22e7f17a8db57\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:20:33.143629 kubelet[2712]: I0813 07:20:33.143227 2712 apiserver.go:52] "Watching apiserver" Aug 13 07:20:33.168909 kubelet[2712]: I0813 07:20:33.168883 2712 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:20:33.208699 kubelet[2712]: E0813 07:20:33.208678 2712 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:20:33.221676 kubelet[2712]: I0813 07:20:33.221638 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.22161844 podStartE2EDuration="1.22161844s" podCreationTimestamp="2025-08-13 07:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:33.221544891 +0000 UTC m=+1.190977033" watchObservedRunningTime="2025-08-13 07:20:33.22161844 +0000 UTC m=+1.191050579" Aug 13 07:20:33.232579 kubelet[2712]: I0813 07:20:33.232513 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.232477169 podStartE2EDuration="1.232477169s" podCreationTimestamp="2025-08-13 07:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:33.232097132 +0000 UTC m=+1.201529282" watchObservedRunningTime="2025-08-13 07:20:33.232477169 +0000 UTC m=+1.201909313" Aug 13 07:20:33.233014 kubelet[2712]: 
I0813 07:20:33.232852 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.232717408 podStartE2EDuration="1.232717408s" podCreationTimestamp="2025-08-13 07:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:33.227775512 +0000 UTC m=+1.197207663" watchObservedRunningTime="2025-08-13 07:20:33.232717408 +0000 UTC m=+1.202149554" Aug 13 07:20:36.004058 kubelet[2712]: I0813 07:20:36.004011 2712 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:20:36.004308 containerd[1542]: time="2025-08-13T07:20:36.004187584Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:20:36.004815 kubelet[2712]: I0813 07:20:36.004488 2712 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:20:36.810362 systemd[1]: Created slice kubepods-besteffort-pod98bd2802_e86b_4385_846b_b283bea6a341.slice - libcontainer container kubepods-besteffort-pod98bd2802_e86b_4385_846b_b283bea6a341.slice. 
Aug 13 07:20:36.897339 kubelet[2712]: I0813 07:20:36.897203 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98bd2802-e86b-4385-846b-b283bea6a341-lib-modules\") pod \"kube-proxy-7ckzx\" (UID: \"98bd2802-e86b-4385-846b-b283bea6a341\") " pod="kube-system/kube-proxy-7ckzx" Aug 13 07:20:36.897339 kubelet[2712]: I0813 07:20:36.897235 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4f7b\" (UniqueName: \"kubernetes.io/projected/98bd2802-e86b-4385-846b-b283bea6a341-kube-api-access-v4f7b\") pod \"kube-proxy-7ckzx\" (UID: \"98bd2802-e86b-4385-846b-b283bea6a341\") " pod="kube-system/kube-proxy-7ckzx" Aug 13 07:20:36.897339 kubelet[2712]: I0813 07:20:36.897280 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98bd2802-e86b-4385-846b-b283bea6a341-kube-proxy\") pod \"kube-proxy-7ckzx\" (UID: \"98bd2802-e86b-4385-846b-b283bea6a341\") " pod="kube-system/kube-proxy-7ckzx" Aug 13 07:20:36.897339 kubelet[2712]: I0813 07:20:36.897296 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98bd2802-e86b-4385-846b-b283bea6a341-xtables-lock\") pod \"kube-proxy-7ckzx\" (UID: \"98bd2802-e86b-4385-846b-b283bea6a341\") " pod="kube-system/kube-proxy-7ckzx" Aug 13 07:20:36.925136 systemd[1]: Created slice kubepods-besteffort-pod05d79934_4e33_4eea_9b4b_e57b83f17b14.slice - libcontainer container kubepods-besteffort-pod05d79934_4e33_4eea_9b4b_e57b83f17b14.slice. 
Aug 13 07:20:36.997392 kubelet[2712]: I0813 07:20:36.997364 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05d79934-4e33-4eea-9b4b-e57b83f17b14-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-wx4st\" (UID: \"05d79934-4e33-4eea-9b4b-e57b83f17b14\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wx4st" Aug 13 07:20:36.997488 kubelet[2712]: I0813 07:20:36.997403 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtgfv\" (UniqueName: \"kubernetes.io/projected/05d79934-4e33-4eea-9b4b-e57b83f17b14-kube-api-access-dtgfv\") pod \"tigera-operator-5bf8dfcb4-wx4st\" (UID: \"05d79934-4e33-4eea-9b4b-e57b83f17b14\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wx4st" Aug 13 07:20:37.117009 containerd[1542]: time="2025-08-13T07:20:37.116947413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ckzx,Uid:98bd2802-e86b-4385-846b-b283bea6a341,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:37.132641 containerd[1542]: time="2025-08-13T07:20:37.132515266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:37.132641 containerd[1542]: time="2025-08-13T07:20:37.132548163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:37.132641 containerd[1542]: time="2025-08-13T07:20:37.132566183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:37.132799 containerd[1542]: time="2025-08-13T07:20:37.132733770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:37.146925 systemd[1]: Started cri-containerd-392183d1d59ace818a1d9174df25710fc64afeb4261434447d8cc87430ebe60a.scope - libcontainer container 392183d1d59ace818a1d9174df25710fc64afeb4261434447d8cc87430ebe60a. Aug 13 07:20:37.163480 containerd[1542]: time="2025-08-13T07:20:37.163447363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ckzx,Uid:98bd2802-e86b-4385-846b-b283bea6a341,Namespace:kube-system,Attempt:0,} returns sandbox id \"392183d1d59ace818a1d9174df25710fc64afeb4261434447d8cc87430ebe60a\"" Aug 13 07:20:37.165382 containerd[1542]: time="2025-08-13T07:20:37.165273595Z" level=info msg="CreateContainer within sandbox \"392183d1d59ace818a1d9174df25710fc64afeb4261434447d8cc87430ebe60a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:20:37.174183 containerd[1542]: time="2025-08-13T07:20:37.174149880Z" level=info msg="CreateContainer within sandbox \"392183d1d59ace818a1d9174df25710fc64afeb4261434447d8cc87430ebe60a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"279240781f20f023cac060e9449681d1eff5b7c020c2353dd3e67a897655c798\"" Aug 13 07:20:37.175055 containerd[1542]: time="2025-08-13T07:20:37.174576799Z" level=info msg="StartContainer for \"279240781f20f023cac060e9449681d1eff5b7c020c2353dd3e67a897655c798\"" Aug 13 07:20:37.192260 systemd[1]: Started cri-containerd-279240781f20f023cac060e9449681d1eff5b7c020c2353dd3e67a897655c798.scope - libcontainer container 279240781f20f023cac060e9449681d1eff5b7c020c2353dd3e67a897655c798. 
Aug 13 07:20:37.212532 containerd[1542]: time="2025-08-13T07:20:37.212461155Z" level=info msg="StartContainer for \"279240781f20f023cac060e9449681d1eff5b7c020c2353dd3e67a897655c798\" returns successfully" Aug 13 07:20:37.228228 containerd[1542]: time="2025-08-13T07:20:37.228132907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wx4st,Uid:05d79934-4e33-4eea-9b4b-e57b83f17b14,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:20:37.246122 containerd[1542]: time="2025-08-13T07:20:37.245912903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:37.246122 containerd[1542]: time="2025-08-13T07:20:37.245958005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:37.246122 containerd[1542]: time="2025-08-13T07:20:37.245968523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:37.246122 containerd[1542]: time="2025-08-13T07:20:37.246037834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:37.264931 systemd[1]: Started cri-containerd-c400af607429291a50f7c94e7a1a796b8ed9fd5f010efb0f55116db09720819a.scope - libcontainer container c400af607429291a50f7c94e7a1a796b8ed9fd5f010efb0f55116db09720819a. 
Aug 13 07:20:37.294793 containerd[1542]: time="2025-08-13T07:20:37.294745040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wx4st,Uid:05d79934-4e33-4eea-9b4b-e57b83f17b14,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c400af607429291a50f7c94e7a1a796b8ed9fd5f010efb0f55116db09720819a\"" Aug 13 07:20:37.296074 containerd[1542]: time="2025-08-13T07:20:37.295975176Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:20:38.764507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17495987.mount: Deactivated successfully. Aug 13 07:20:39.224108 kubelet[2712]: I0813 07:20:39.223859 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ckzx" podStartSLOduration=3.223847741 podStartE2EDuration="3.223847741s" podCreationTimestamp="2025-08-13 07:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:38.22249465 +0000 UTC m=+6.191926800" watchObservedRunningTime="2025-08-13 07:20:39.223847741 +0000 UTC m=+7.193279891" Aug 13 07:20:39.345953 containerd[1542]: time="2025-08-13T07:20:39.345889283Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:39.346392 containerd[1542]: time="2025-08-13T07:20:39.346370870Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:20:39.347641 containerd[1542]: time="2025-08-13T07:20:39.347609371Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:39.348668 containerd[1542]: time="2025-08-13T07:20:39.348646781Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:39.349375 containerd[1542]: time="2025-08-13T07:20:39.349095707Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.053100678s" Aug 13 07:20:39.349375 containerd[1542]: time="2025-08-13T07:20:39.349127009Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:20:39.353555 containerd[1542]: time="2025-08-13T07:20:39.353535746Z" level=info msg="CreateContainer within sandbox \"c400af607429291a50f7c94e7a1a796b8ed9fd5f010efb0f55116db09720819a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:20:39.635177 containerd[1542]: time="2025-08-13T07:20:39.635110393Z" level=info msg="CreateContainer within sandbox \"c400af607429291a50f7c94e7a1a796b8ed9fd5f010efb0f55116db09720819a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"66f1266fba33da79fe48948e84bb7742b9d1c336bebbba1ee7c66c7e55825fb3\"" Aug 13 07:20:39.635598 containerd[1542]: time="2025-08-13T07:20:39.635576593Z" level=info msg="StartContainer for \"66f1266fba33da79fe48948e84bb7742b9d1c336bebbba1ee7c66c7e55825fb3\"" Aug 13 07:20:39.657913 systemd[1]: Started cri-containerd-66f1266fba33da79fe48948e84bb7742b9d1c336bebbba1ee7c66c7e55825fb3.scope - libcontainer container 66f1266fba33da79fe48948e84bb7742b9d1c336bebbba1ee7c66c7e55825fb3. 
Aug 13 07:20:39.688292 containerd[1542]: time="2025-08-13T07:20:39.688194141Z" level=info msg="StartContainer for \"66f1266fba33da79fe48948e84bb7742b9d1c336bebbba1ee7c66c7e55825fb3\" returns successfully" Aug 13 07:20:40.225055 kubelet[2712]: I0813 07:20:40.224977 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-wx4st" podStartSLOduration=2.170976383 podStartE2EDuration="4.224964304s" podCreationTimestamp="2025-08-13 07:20:36 +0000 UTC" firstStartedPulling="2025-08-13 07:20:37.295640208 +0000 UTC m=+5.265072350" lastFinishedPulling="2025-08-13 07:20:39.349628128 +0000 UTC m=+7.319060271" observedRunningTime="2025-08-13 07:20:40.22374298 +0000 UTC m=+8.193175131" watchObservedRunningTime="2025-08-13 07:20:40.224964304 +0000 UTC m=+8.194396448" Aug 13 07:20:45.019686 sudo[1826]: pam_unix(sudo:session): session closed for user root Aug 13 07:20:45.023292 sshd[1823]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:45.025575 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:44304.service: Deactivated successfully. Aug 13 07:20:45.028175 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:20:45.028576 systemd[1]: session-9.scope: Consumed 2.597s CPU time, 141.2M memory peak, 0B memory swap peak. Aug 13 07:20:45.030289 systemd-logind[1513]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:20:45.031392 systemd-logind[1513]: Removed session 9. Aug 13 07:20:47.554351 systemd[1]: Created slice kubepods-besteffort-podc3379823_c4e0_4b36_9c7f_b62dd9455c3b.slice - libcontainer container kubepods-besteffort-podc3379823_c4e0_4b36_9c7f_b62dd9455c3b.slice. Aug 13 07:20:47.652560 systemd[1]: Created slice kubepods-besteffort-pod94fcc113_1673_4b4c_9dd4_6577b95fe4a9.slice - libcontainer container kubepods-besteffort-pod94fcc113_1673_4b4c_9dd4_6577b95fe4a9.slice. 
Aug 13 07:20:47.662481 kubelet[2712]: I0813 07:20:47.662276 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c3379823-c4e0-4b36-9c7f-b62dd9455c3b-typha-certs\") pod \"calico-typha-598b9659f5-w2p2x\" (UID: \"c3379823-c4e0-4b36-9c7f-b62dd9455c3b\") " pod="calico-system/calico-typha-598b9659f5-w2p2x" Aug 13 07:20:47.662481 kubelet[2712]: I0813 07:20:47.662309 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3379823-c4e0-4b36-9c7f-b62dd9455c3b-tigera-ca-bundle\") pod \"calico-typha-598b9659f5-w2p2x\" (UID: \"c3379823-c4e0-4b36-9c7f-b62dd9455c3b\") " pod="calico-system/calico-typha-598b9659f5-w2p2x" Aug 13 07:20:47.662481 kubelet[2712]: I0813 07:20:47.662322 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lf7t\" (UniqueName: \"kubernetes.io/projected/c3379823-c4e0-4b36-9c7f-b62dd9455c3b-kube-api-access-5lf7t\") pod \"calico-typha-598b9659f5-w2p2x\" (UID: \"c3379823-c4e0-4b36-9c7f-b62dd9455c3b\") " pod="calico-system/calico-typha-598b9659f5-w2p2x" Aug 13 07:20:47.763451 kubelet[2712]: I0813 07:20:47.763423 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-lib-modules\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.763451 kubelet[2712]: I0813 07:20:47.763450 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-node-certs\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" 
Aug 13 07:20:47.763747 kubelet[2712]: I0813 07:20:47.763465 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-xtables-lock\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.763747 kubelet[2712]: I0813 07:20:47.763483 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-var-lib-calico\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.763747 kubelet[2712]: I0813 07:20:47.763587 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-policysync\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.763747 kubelet[2712]: I0813 07:20:47.763609 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-cni-bin-dir\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.764345 kubelet[2712]: I0813 07:20:47.763624 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x29x\" (UniqueName: \"kubernetes.io/projected/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-kube-api-access-7x29x\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.764345 kubelet[2712]: I0813 07:20:47.763925 2712 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-tigera-ca-bundle\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.764345 kubelet[2712]: I0813 07:20:47.763951 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-cni-log-dir\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.764345 kubelet[2712]: I0813 07:20:47.763963 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-flexvol-driver-host\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.764345 kubelet[2712]: I0813 07:20:47.763997 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-cni-net-dir\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.764545 kubelet[2712]: I0813 07:20:47.764011 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/94fcc113-1673-4b4c-9dd4-6577b95fe4a9-var-run-calico\") pod \"calico-node-xg459\" (UID: \"94fcc113-1673-4b4c-9dd4-6577b95fe4a9\") " pod="calico-system/calico-node-xg459" Aug 13 07:20:47.823222 kubelet[2712]: E0813 07:20:47.823113 2712 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22" Aug 13 07:20:47.860398 containerd[1542]: time="2025-08-13T07:20:47.860029044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598b9659f5-w2p2x,Uid:c3379823-c4e0-4b36-9c7f-b62dd9455c3b,Namespace:calico-system,Attempt:0,}" Aug 13 07:20:47.865319 kubelet[2712]: I0813 07:20:47.864963 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58b74c31-6d05-4f11-8c94-9d85e9d65a22-kubelet-dir\") pod \"csi-node-driver-cs2xn\" (UID: \"58b74c31-6d05-4f11-8c94-9d85e9d65a22\") " pod="calico-system/csi-node-driver-cs2xn" Aug 13 07:20:47.865319 kubelet[2712]: I0813 07:20:47.865022 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58b74c31-6d05-4f11-8c94-9d85e9d65a22-socket-dir\") pod \"csi-node-driver-cs2xn\" (UID: \"58b74c31-6d05-4f11-8c94-9d85e9d65a22\") " pod="calico-system/csi-node-driver-cs2xn" Aug 13 07:20:47.865319 kubelet[2712]: I0813 07:20:47.865034 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh7pl\" (UniqueName: \"kubernetes.io/projected/58b74c31-6d05-4f11-8c94-9d85e9d65a22-kube-api-access-rh7pl\") pod \"csi-node-driver-cs2xn\" (UID: \"58b74c31-6d05-4f11-8c94-9d85e9d65a22\") " pod="calico-system/csi-node-driver-cs2xn" Aug 13 07:20:47.865319 kubelet[2712]: I0813 07:20:47.865044 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/58b74c31-6d05-4f11-8c94-9d85e9d65a22-varrun\") pod \"csi-node-driver-cs2xn\" (UID: 
\"58b74c31-6d05-4f11-8c94-9d85e9d65a22\") " pod="calico-system/csi-node-driver-cs2xn" Aug 13 07:20:47.865319 kubelet[2712]: I0813 07:20:47.865065 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58b74c31-6d05-4f11-8c94-9d85e9d65a22-registration-dir\") pod \"csi-node-driver-cs2xn\" (UID: \"58b74c31-6d05-4f11-8c94-9d85e9d65a22\") " pod="calico-system/csi-node-driver-cs2xn" Aug 13 07:20:47.880775 containerd[1542]: time="2025-08-13T07:20:47.878724524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:47.880775 containerd[1542]: time="2025-08-13T07:20:47.878765245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:47.880775 containerd[1542]: time="2025-08-13T07:20:47.878775505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:47.880775 containerd[1542]: time="2025-08-13T07:20:47.880430386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:47.886211 kubelet[2712]: E0813 07:20:47.886180 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:47.886295 kubelet[2712]: W0813 07:20:47.886218 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:47.896072 kubelet[2712]: E0813 07:20:47.896026 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:20:47.934040 systemd[1]: Started cri-containerd-2f859698c141cb702bcde317269a38da54f9475d5dcb463b4ad5f9aa9c3dc867.scope - libcontainer container 2f859698c141cb702bcde317269a38da54f9475d5dcb463b4ad5f9aa9c3dc867. Aug 13 07:20:47.957059 containerd[1542]: time="2025-08-13T07:20:47.956890778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xg459,Uid:94fcc113-1673-4b4c-9dd4-6577b95fe4a9,Namespace:calico-system,Attempt:0,}" Aug 13 07:20:47.976665 containerd[1542]: time="2025-08-13T07:20:47.976590239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598b9659f5-w2p2x,Uid:c3379823-c4e0-4b36-9c7f-b62dd9455c3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f859698c141cb702bcde317269a38da54f9475d5dcb463b4ad5f9aa9c3dc867\"" Aug 13 07:20:47.983626 kubelet[2712]: E0813 07:20:47.983562 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:47.983626 kubelet[2712]: W0813 07:20:47.983576 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:47.983626 kubelet[2712]: E0813 07:20:47.983590 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 07:20:48.019112 containerd[1542]: time="2025-08-13T07:20:48.019009396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:20:48.058866 containerd[1542]: time="2025-08-13T07:20:48.058784546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:48.058866 containerd[1542]: time="2025-08-13T07:20:48.058838080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:48.058866 containerd[1542]: time="2025-08-13T07:20:48.058851909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:48.059072 containerd[1542]: time="2025-08-13T07:20:48.058907444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:48.081964 systemd[1]: Started cri-containerd-b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded.scope - libcontainer container b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded. 
Aug 13 07:20:48.110925 containerd[1542]: time="2025-08-13T07:20:48.110870135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xg459,Uid:94fcc113-1673-4b4c-9dd4-6577b95fe4a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded\"" Aug 13 07:20:49.171962 kubelet[2712]: E0813 07:20:49.171876 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22" Aug 13 07:20:49.721360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061056047.mount: Deactivated successfully. Aug 13 07:20:50.784317 containerd[1542]: time="2025-08-13T07:20:50.783902114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:50.785749 containerd[1542]: time="2025-08-13T07:20:50.785728703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:20:50.787898 containerd[1542]: time="2025-08-13T07:20:50.787867522Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:50.789873 containerd[1542]: time="2025-08-13T07:20:50.789848999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:50.790455 containerd[1542]: time="2025-08-13T07:20:50.790173668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.77113768s" Aug 13 07:20:50.790455 containerd[1542]: time="2025-08-13T07:20:50.790191154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:20:50.793948 containerd[1542]: time="2025-08-13T07:20:50.790814143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:20:50.825010 containerd[1542]: time="2025-08-13T07:20:50.824982720Z" level=info msg="CreateContainer within sandbox \"2f859698c141cb702bcde317269a38da54f9475d5dcb463b4ad5f9aa9c3dc867\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:20:50.829489 containerd[1542]: time="2025-08-13T07:20:50.829471136Z" level=info msg="CreateContainer within sandbox \"2f859698c141cb702bcde317269a38da54f9475d5dcb463b4ad5f9aa9c3dc867\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"532bb1721beae066d07b0253f38312d189f6e671fcb186ce8d78ae201dbf3808\"" Aug 13 07:20:50.834894 containerd[1542]: time="2025-08-13T07:20:50.834849274Z" level=info msg="StartContainer for \"532bb1721beae066d07b0253f38312d189f6e671fcb186ce8d78ae201dbf3808\"" Aug 13 07:20:50.865989 systemd[1]: Started cri-containerd-532bb1721beae066d07b0253f38312d189f6e671fcb186ce8d78ae201dbf3808.scope - libcontainer container 532bb1721beae066d07b0253f38312d189f6e671fcb186ce8d78ae201dbf3808. 
Aug 13 07:20:50.902854 containerd[1542]: time="2025-08-13T07:20:50.902757717Z" level=info msg="StartContainer for \"532bb1721beae066d07b0253f38312d189f6e671fcb186ce8d78ae201dbf3808\" returns successfully" Aug 13 07:20:51.181629 kubelet[2712]: E0813 07:20:51.181521 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22" Aug 13 07:20:51.266580 kubelet[2712]: I0813 07:20:51.265454 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-598b9659f5-w2p2x" podStartSLOduration=1.492088081 podStartE2EDuration="4.265355557s" podCreationTimestamp="2025-08-13 07:20:47 +0000 UTC" firstStartedPulling="2025-08-13 07:20:48.017384571 +0000 UTC m=+15.986816713" lastFinishedPulling="2025-08-13 07:20:50.790652047 +0000 UTC m=+18.760084189" observedRunningTime="2025-08-13 07:20:51.264928586 +0000 UTC m=+19.234360736" watchObservedRunningTime="2025-08-13 07:20:51.265355557 +0000 UTC m=+19.234787701" Aug 13 07:20:51.287576 kubelet[2712]: E0813 07:20:51.287553 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:51.287754 kubelet[2712]: W0813 07:20:51.287667 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:51.287754 kubelet[2712]: E0813 07:20:51.287685 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:20:52.253408 kubelet[2712]: I0813 07:20:52.253383 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:20:52.296858 kubelet[2712]: E0813 07:20:52.296809 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:52.296858 kubelet[2712]: W0813 07:20:52.296835 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:52.296858 kubelet[2712]: E0813 07:20:52.296850 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:20:52.297101 kubelet[2712]: E0813 07:20:52.297089 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:52.297101 kubelet[2712]: W0813 07:20:52.297098 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:52.297164 kubelet[2712]: E0813 07:20:52.297104 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:20:52.297298 kubelet[2712]: E0813 07:20:52.297289 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:52.297298 kubelet[2712]: W0813 07:20:52.297297 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:52.297349 kubelet[2712]: E0813 07:20:52.297305 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:20:52.297482 kubelet[2712]: E0813 07:20:52.297473 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:52.297482 kubelet[2712]: W0813 07:20:52.297480 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:52.297555 kubelet[2712]: E0813 07:20:52.297486 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:20:52.303520 kubelet[2712]: E0813 07:20:52.303514 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:20:52.303561 kubelet[2712]: W0813 07:20:52.303555 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:20:52.303601 kubelet[2712]: E0813 07:20:52.303588 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:20:52.452325 containerd[1542]: time="2025-08-13T07:20:52.452294434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:52.453489 containerd[1542]: time="2025-08-13T07:20:52.453386133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:20:52.453853 containerd[1542]: time="2025-08-13T07:20:52.453838867Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:52.455753 containerd[1542]: time="2025-08-13T07:20:52.455721345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:52.456859 containerd[1542]: time="2025-08-13T07:20:52.456833201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.665994761s" Aug 13 07:20:52.456859 containerd[1542]: time="2025-08-13T07:20:52.456854899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:20:52.470451 containerd[1542]: time="2025-08-13T07:20:52.470428916Z" level=info msg="CreateContainer within sandbox \"b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:20:52.506453 containerd[1542]: time="2025-08-13T07:20:52.505995741Z" level=info msg="CreateContainer within sandbox \"b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e\"" Aug 13 07:20:52.508671 containerd[1542]: time="2025-08-13T07:20:52.507400844Z" level=info msg="StartContainer for \"a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e\"" Aug 13 07:20:52.560106 systemd[1]: Started cri-containerd-a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e.scope - libcontainer container a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e. Aug 13 07:20:52.586427 containerd[1542]: time="2025-08-13T07:20:52.586360776Z" level=info msg="StartContainer for \"a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e\" returns successfully" Aug 13 07:20:52.593539 systemd[1]: cri-containerd-a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e.scope: Deactivated successfully. 
Aug 13 07:20:52.608038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e-rootfs.mount: Deactivated successfully. Aug 13 07:20:53.123175 containerd[1542]: time="2025-08-13T07:20:53.102693523Z" level=info msg="shim disconnected" id=a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e namespace=k8s.io Aug 13 07:20:53.123306 containerd[1542]: time="2025-08-13T07:20:53.123174589Z" level=warning msg="cleaning up after shim disconnected" id=a4963b7ebd0aa8ee873197c197edaf7b1eb64eb870600a5412b08eeb5c9b4d6e namespace=k8s.io Aug 13 07:20:53.123306 containerd[1542]: time="2025-08-13T07:20:53.123195028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:53.171709 kubelet[2712]: E0813 07:20:53.171490 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22" Aug 13 07:20:53.258485 containerd[1542]: time="2025-08-13T07:20:53.258454006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:20:55.171754 kubelet[2712]: E0813 07:20:55.171725 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22" Aug 13 07:20:56.876336 containerd[1542]: time="2025-08-13T07:20:56.875634380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:20:56.884317 containerd[1542]: time="2025-08-13T07:20:56.884292660Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:56.885875 containerd[1542]: time="2025-08-13T07:20:56.885859019Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:56.886390 containerd[1542]: time="2025-08-13T07:20:56.886373867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.627866113s" Aug 13 07:20:56.886447 containerd[1542]: time="2025-08-13T07:20:56.886436865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:20:56.886917 containerd[1542]: time="2025-08-13T07:20:56.886876237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:56.888158 containerd[1542]: time="2025-08-13T07:20:56.888144795Z" level=info msg="CreateContainer within sandbox \"b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:20:56.897612 containerd[1542]: time="2025-08-13T07:20:56.897591455Z" level=info msg="CreateContainer within sandbox \"b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129\"" Aug 13 07:20:56.898084 containerd[1542]: time="2025-08-13T07:20:56.898068178Z" 
level=info msg="StartContainer for \"ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129\"" Aug 13 07:20:56.917907 systemd[1]: Started cri-containerd-ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129.scope - libcontainer container ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129. Aug 13 07:20:56.934847 containerd[1542]: time="2025-08-13T07:20:56.934795050Z" level=info msg="StartContainer for \"ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129\" returns successfully" Aug 13 07:20:57.171423 kubelet[2712]: E0813 07:20:57.171317 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22" Aug 13 07:20:57.751323 systemd[1]: cri-containerd-ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129.scope: Deactivated successfully. Aug 13 07:20:57.788729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129-rootfs.mount: Deactivated successfully. Aug 13 07:20:57.826647 kubelet[2712]: I0813 07:20:57.826632 2712 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:20:57.973383 systemd[1]: Created slice kubepods-burstable-podea5dce70_7db5_4a88_9ece_439f7b00c36d.slice - libcontainer container kubepods-burstable-podea5dce70_7db5_4a88_9ece_439f7b00c36d.slice. Aug 13 07:20:57.980201 systemd[1]: Created slice kubepods-burstable-podeb2e06ca_79db_46b7_a389_34bc602b3a47.slice - libcontainer container kubepods-burstable-podeb2e06ca_79db_46b7_a389_34bc602b3a47.slice. 
Aug 13 07:20:57.984809 systemd[1]: Created slice kubepods-besteffort-pod8b049e0e_6f6a_4b4a_ac77_b6115705ff30.slice - libcontainer container kubepods-besteffort-pod8b049e0e_6f6a_4b4a_ac77_b6115705ff30.slice. Aug 13 07:20:57.988302 systemd[1]: Created slice kubepods-besteffort-podb14a1626_6ffd_46ab_a2c5_1f786b38698c.slice - libcontainer container kubepods-besteffort-podb14a1626_6ffd_46ab_a2c5_1f786b38698c.slice. Aug 13 07:20:57.991093 systemd[1]: Created slice kubepods-besteffort-podbec5125a_0b32_4b9c_9b75_c6827db5f9b6.slice - libcontainer container kubepods-besteffort-podbec5125a_0b32_4b9c_9b75_c6827db5f9b6.slice. Aug 13 07:20:57.995958 systemd[1]: Created slice kubepods-besteffort-pod1308b7c2_9bc6_44fd_8d86_34c607162a5d.slice - libcontainer container kubepods-besteffort-pod1308b7c2_9bc6_44fd_8d86_34c607162a5d.slice. Aug 13 07:20:58.001297 systemd[1]: Created slice kubepods-besteffort-podf1b8ec24_0eba_4c4b_be6f_8021be0e5ebc.slice - libcontainer container kubepods-besteffort-podf1b8ec24_0eba_4c4b_be6f_8021be0e5ebc.slice. 
Aug 13 07:20:58.007724 kubelet[2712]: W0813 07:20:57.998760 2712 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Aug 13 07:20:58.007724 kubelet[2712]: E0813 07:20:57.998784 2712 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:58.007724 kubelet[2712]: W0813 07:20:58.000699 2712 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Aug 13 07:20:58.007724 kubelet[2712]: E0813 07:20:58.000717 2712 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:58.007724 kubelet[2712]: W0813 07:20:58.000835 2712 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace 
"calico-system": no relationship found between node 'localhost' and this object Aug 13 07:20:58.007882 kubelet[2712]: E0813 07:20:58.000846 2712 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:58.007882 kubelet[2712]: W0813 07:20:58.001034 2712 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Aug 13 07:20:58.007882 kubelet[2712]: E0813 07:20:58.001045 2712 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:58.007882 kubelet[2712]: W0813 07:20:58.001461 2712 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Aug 13 07:20:58.007882 kubelet[2712]: E0813 07:20:58.001475 2712 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: 
User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:58.008049 kubelet[2712]: W0813 07:20:58.001504 2712 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Aug 13 07:20:58.008049 kubelet[2712]: E0813 07:20:58.001511 2712 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:58.008049 kubelet[2712]: W0813 07:20:58.005669 2712 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Aug 13 07:20:58.008049 kubelet[2712]: E0813 07:20:58.005686 2712 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 07:20:58.139850 kubelet[2712]: I0813 07:20:58.139750 2712 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-backend-key-pair\") pod \"whisker-6685cc6896-jpz9d\" (UID: \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\") " pod="calico-system/whisker-6685cc6896-jpz9d" Aug 13 07:20:58.139850 kubelet[2712]: I0813 07:20:58.139780 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea5dce70-7db5-4a88-9ece-439f7b00c36d-config-volume\") pod \"coredns-7c65d6cfc9-m6rn2\" (UID: \"ea5dce70-7db5-4a88-9ece-439f7b00c36d\") " pod="kube-system/coredns-7c65d6cfc9-m6rn2" Aug 13 07:20:58.139850 kubelet[2712]: I0813 07:20:58.139792 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb25k\" (UniqueName: \"kubernetes.io/projected/1308b7c2-9bc6-44fd-8d86-34c607162a5d-kube-api-access-nb25k\") pod \"calico-kube-controllers-66794f6b97-5xnbz\" (UID: \"1308b7c2-9bc6-44fd-8d86-34c607162a5d\") " pod="calico-system/calico-kube-controllers-66794f6b97-5xnbz" Aug 13 07:20:58.139850 kubelet[2712]: I0813 07:20:58.139804 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhthv\" (UniqueName: \"kubernetes.io/projected/f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc-kube-api-access-xhthv\") pod \"goldmane-58fd7646b9-nwh9w\" (UID: \"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc\") " pod="calico-system/goldmane-58fd7646b9-nwh9w" Aug 13 07:20:58.139850 kubelet[2712]: I0813 07:20:58.139815 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7jls\" (UniqueName: \"kubernetes.io/projected/b14a1626-6ffd-46ab-a2c5-1f786b38698c-kube-api-access-b7jls\") pod \"calico-apiserver-649d79c576-kmjq6\" (UID: \"b14a1626-6ffd-46ab-a2c5-1f786b38698c\") " 
pod="calico-apiserver/calico-apiserver-649d79c576-kmjq6" Aug 13 07:20:58.140024 kubelet[2712]: I0813 07:20:58.139836 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzwfh\" (UniqueName: \"kubernetes.io/projected/bec5125a-0b32-4b9c-9b75-c6827db5f9b6-kube-api-access-jzwfh\") pod \"calico-apiserver-649d79c576-jsplw\" (UID: \"bec5125a-0b32-4b9c-9b75-c6827db5f9b6\") " pod="calico-apiserver/calico-apiserver-649d79c576-jsplw" Aug 13 07:20:58.140024 kubelet[2712]: I0813 07:20:58.139847 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc-goldmane-key-pair\") pod \"goldmane-58fd7646b9-nwh9w\" (UID: \"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc\") " pod="calico-system/goldmane-58fd7646b9-nwh9w" Aug 13 07:20:58.140024 kubelet[2712]: I0813 07:20:58.139856 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskv9\" (UniqueName: \"kubernetes.io/projected/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-kube-api-access-dskv9\") pod \"whisker-6685cc6896-jpz9d\" (UID: \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\") " pod="calico-system/whisker-6685cc6896-jpz9d" Aug 13 07:20:58.140024 kubelet[2712]: I0813 07:20:58.139865 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-ca-bundle\") pod \"whisker-6685cc6896-jpz9d\" (UID: \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\") " pod="calico-system/whisker-6685cc6896-jpz9d" Aug 13 07:20:58.140024 kubelet[2712]: I0813 07:20:58.139877 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/bec5125a-0b32-4b9c-9b75-c6827db5f9b6-calico-apiserver-certs\") pod \"calico-apiserver-649d79c576-jsplw\" (UID: \"bec5125a-0b32-4b9c-9b75-c6827db5f9b6\") " pod="calico-apiserver/calico-apiserver-649d79c576-jsplw" Aug 13 07:20:58.140110 kubelet[2712]: I0813 07:20:58.139887 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1308b7c2-9bc6-44fd-8d86-34c607162a5d-tigera-ca-bundle\") pod \"calico-kube-controllers-66794f6b97-5xnbz\" (UID: \"1308b7c2-9bc6-44fd-8d86-34c607162a5d\") " pod="calico-system/calico-kube-controllers-66794f6b97-5xnbz" Aug 13 07:20:58.140110 kubelet[2712]: I0813 07:20:58.139898 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2e06ca-79db-46b7-a389-34bc602b3a47-config-volume\") pod \"coredns-7c65d6cfc9-w2f57\" (UID: \"eb2e06ca-79db-46b7-a389-34bc602b3a47\") " pod="kube-system/coredns-7c65d6cfc9-w2f57" Aug 13 07:20:58.140110 kubelet[2712]: I0813 07:20:58.139907 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jrh\" (UniqueName: \"kubernetes.io/projected/eb2e06ca-79db-46b7-a389-34bc602b3a47-kube-api-access-55jrh\") pod \"coredns-7c65d6cfc9-w2f57\" (UID: \"eb2e06ca-79db-46b7-a389-34bc602b3a47\") " pod="kube-system/coredns-7c65d6cfc9-w2f57" Aug 13 07:20:58.140110 kubelet[2712]: I0813 07:20:58.139916 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-nwh9w\" (UID: \"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc\") " pod="calico-system/goldmane-58fd7646b9-nwh9w" Aug 13 07:20:58.140110 kubelet[2712]: I0813 07:20:58.139926 2712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b14a1626-6ffd-46ab-a2c5-1f786b38698c-calico-apiserver-certs\") pod \"calico-apiserver-649d79c576-kmjq6\" (UID: \"b14a1626-6ffd-46ab-a2c5-1f786b38698c\") " pod="calico-apiserver/calico-apiserver-649d79c576-kmjq6" Aug 13 07:20:58.140233 kubelet[2712]: I0813 07:20:58.139935 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwrcq\" (UniqueName: \"kubernetes.io/projected/ea5dce70-7db5-4a88-9ece-439f7b00c36d-kube-api-access-cwrcq\") pod \"coredns-7c65d6cfc9-m6rn2\" (UID: \"ea5dce70-7db5-4a88-9ece-439f7b00c36d\") " pod="kube-system/coredns-7c65d6cfc9-m6rn2" Aug 13 07:20:58.140233 kubelet[2712]: I0813 07:20:58.139947 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc-config\") pod \"goldmane-58fd7646b9-nwh9w\" (UID: \"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc\") " pod="calico-system/goldmane-58fd7646b9-nwh9w" Aug 13 07:20:58.220763 containerd[1542]: time="2025-08-13T07:20:58.219242832Z" level=info msg="shim disconnected" id=ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129 namespace=k8s.io Aug 13 07:20:58.220763 containerd[1542]: time="2025-08-13T07:20:58.219284453Z" level=warning msg="cleaning up after shim disconnected" id=ceca2a40eee6858a562958bb20934356d3588906b08d5a84e327583ddce2c129 namespace=k8s.io Aug 13 07:20:58.220763 containerd[1542]: time="2025-08-13T07:20:58.219292132Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:20:58.268939 containerd[1542]: time="2025-08-13T07:20:58.268462894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:20:58.281929 containerd[1542]: time="2025-08-13T07:20:58.281903219Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-m6rn2,Uid:ea5dce70-7db5-4a88-9ece-439f7b00c36d,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:58.283789 containerd[1542]: time="2025-08-13T07:20:58.283672957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2f57,Uid:eb2e06ca-79db-46b7-a389-34bc602b3a47,Namespace:kube-system,Attempt:0,}" Aug 13 07:20:58.311328 containerd[1542]: time="2025-08-13T07:20:58.311092247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66794f6b97-5xnbz,Uid:1308b7c2-9bc6-44fd-8d86-34c607162a5d,Namespace:calico-system,Attempt:0,}" Aug 13 07:20:58.550293 containerd[1542]: time="2025-08-13T07:20:58.550053014Z" level=error msg="Failed to destroy network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.551813 containerd[1542]: time="2025-08-13T07:20:58.551793187Z" level=error msg="Failed to destroy network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.553449 containerd[1542]: time="2025-08-13T07:20:58.553341362Z" level=error msg="encountered an error cleaning up failed sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.553449 containerd[1542]: time="2025-08-13T07:20:58.553382619Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-m6rn2,Uid:ea5dce70-7db5-4a88-9ece-439f7b00c36d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.553449 containerd[1542]: time="2025-08-13T07:20:58.553405121Z" level=error msg="Failed to destroy network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.555583 containerd[1542]: time="2025-08-13T07:20:58.553779422Z" level=error msg="encountered an error cleaning up failed sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.555583 containerd[1542]: time="2025-08-13T07:20:58.553814666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2f57,Uid:eb2e06ca-79db-46b7-a389-34bc602b3a47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.558628 containerd[1542]: time="2025-08-13T07:20:58.553341579Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.558628 containerd[1542]: time="2025-08-13T07:20:58.558601391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66794f6b97-5xnbz,Uid:1308b7c2-9bc6-44fd-8d86-34c607162a5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.559926 kubelet[2712]: E0813 07:20:58.558834 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.559926 kubelet[2712]: E0813 07:20:58.558894 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m6rn2" Aug 13 07:20:58.559926 kubelet[2712]: E0813 07:20:58.558909 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m6rn2" Aug 13 07:20:58.560265 kubelet[2712]: E0813 07:20:58.558937 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-m6rn2_kube-system(ea5dce70-7db5-4a88-9ece-439f7b00c36d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-m6rn2_kube-system(ea5dce70-7db5-4a88-9ece-439f7b00c36d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m6rn2" podUID="ea5dce70-7db5-4a88-9ece-439f7b00c36d" Aug 13 07:20:58.560265 kubelet[2712]: E0813 07:20:58.559072 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.560265 kubelet[2712]: E0813 07:20:58.559086 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-w2f57" Aug 13 07:20:58.560342 kubelet[2712]: E0813 07:20:58.559094 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w2f57" Aug 13 07:20:58.560342 kubelet[2712]: E0813 07:20:58.559109 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-w2f57_kube-system(eb2e06ca-79db-46b7-a389-34bc602b3a47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-w2f57_kube-system(eb2e06ca-79db-46b7-a389-34bc602b3a47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-w2f57" podUID="eb2e06ca-79db-46b7-a389-34bc602b3a47" Aug 13 07:20:58.560342 kubelet[2712]: E0813 07:20:58.559124 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:58.560528 kubelet[2712]: E0813 07:20:58.559134 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66794f6b97-5xnbz" Aug 13 07:20:58.560528 kubelet[2712]: E0813 07:20:58.559144 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66794f6b97-5xnbz" Aug 13 07:20:58.560528 kubelet[2712]: E0813 07:20:58.559155 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66794f6b97-5xnbz_calico-system(1308b7c2-9bc6-44fd-8d86-34c607162a5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66794f6b97-5xnbz_calico-system(1308b7c2-9bc6-44fd-8d86-34c607162a5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66794f6b97-5xnbz" podUID="1308b7c2-9bc6-44fd-8d86-34c607162a5d" Aug 13 07:20:59.175192 systemd[1]: Created slice kubepods-besteffort-pod58b74c31_6d05_4f11_8c94_9d85e9d65a22.slice - libcontainer container kubepods-besteffort-pod58b74c31_6d05_4f11_8c94_9d85e9d65a22.slice. 
Aug 13 07:20:59.176925 containerd[1542]: time="2025-08-13T07:20:59.176889464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs2xn,Uid:58b74c31-6d05-4f11-8c94-9d85e9d65a22,Namespace:calico-system,Attempt:0,}" Aug 13 07:20:59.246192 kubelet[2712]: E0813 07:20:59.246168 2712 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Aug 13 07:20:59.247007 kubelet[2712]: E0813 07:20:59.246438 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b14a1626-6ffd-46ab-a2c5-1f786b38698c-calico-apiserver-certs podName:b14a1626-6ffd-46ab-a2c5-1f786b38698c nodeName:}" failed. No retries permitted until 2025-08-13 07:20:59.74641792 +0000 UTC m=+27.715850061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/b14a1626-6ffd-46ab-a2c5-1f786b38698c-calico-apiserver-certs") pod "calico-apiserver-649d79c576-kmjq6" (UID: "b14a1626-6ffd-46ab-a2c5-1f786b38698c") : failed to sync secret cache: timed out waiting for the condition Aug 13 07:20:59.247007 kubelet[2712]: E0813 07:20:59.246248 2712 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Aug 13 07:20:59.247007 kubelet[2712]: E0813 07:20:59.246511 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bec5125a-0b32-4b9c-9b75-c6827db5f9b6-calico-apiserver-certs podName:bec5125a-0b32-4b9c-9b75-c6827db5f9b6 nodeName:}" failed. No retries permitted until 2025-08-13 07:20:59.746502714 +0000 UTC m=+27.715934855 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/bec5125a-0b32-4b9c-9b75-c6827db5f9b6-calico-apiserver-certs") pod "calico-apiserver-649d79c576-jsplw" (UID: "bec5125a-0b32-4b9c-9b75-c6827db5f9b6") : failed to sync secret cache: timed out waiting for the condition Aug 13 07:20:59.247007 kubelet[2712]: E0813 07:20:59.246271 2712 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.247145 kubelet[2712]: E0813 07:20:59.246535 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc-config podName:f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc nodeName:}" failed. No retries permitted until 2025-08-13 07:20:59.746529596 +0000 UTC m=+27.715961735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc-config") pod "goldmane-58fd7646b9-nwh9w" (UID: "f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.255450 kubelet[2712]: E0813 07:20:59.249808 2712 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.255450 kubelet[2712]: E0813 07:20:59.249894 2712 projected.go:194] Error preparing data for projected volume kube-api-access-b7jls for pod calico-apiserver/calico-apiserver-649d79c576-kmjq6: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.255450 kubelet[2712]: E0813 07:20:59.249940 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b14a1626-6ffd-46ab-a2c5-1f786b38698c-kube-api-access-b7jls podName:b14a1626-6ffd-46ab-a2c5-1f786b38698c nodeName:}" failed. 
No retries permitted until 2025-08-13 07:20:59.749924442 +0000 UTC m=+27.719356583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b7jls" (UniqueName: "kubernetes.io/projected/b14a1626-6ffd-46ab-a2c5-1f786b38698c-kube-api-access-b7jls") pod "calico-apiserver-649d79c576-kmjq6" (UID: "b14a1626-6ffd-46ab-a2c5-1f786b38698c") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.255450 kubelet[2712]: E0813 07:20:59.249875 2712 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.255450 kubelet[2712]: E0813 07:20:59.249955 2712 projected.go:194] Error preparing data for projected volume kube-api-access-jzwfh for pod calico-apiserver/calico-apiserver-649d79c576-jsplw: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.256087 kubelet[2712]: E0813 07:20:59.249970 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bec5125a-0b32-4b9c-9b75-c6827db5f9b6-kube-api-access-jzwfh podName:bec5125a-0b32-4b9c-9b75-c6827db5f9b6 nodeName:}" failed. No retries permitted until 2025-08-13 07:20:59.749965412 +0000 UTC m=+27.719397554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jzwfh" (UniqueName: "kubernetes.io/projected/bec5125a-0b32-4b9c-9b75-c6827db5f9b6-kube-api-access-jzwfh") pod "calico-apiserver-649d79c576-jsplw" (UID: "bec5125a-0b32-4b9c-9b75-c6827db5f9b6") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:20:59.258865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db-shm.mount: Deactivated successfully. Aug 13 07:20:59.258993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855-shm.mount: Deactivated successfully. 
Aug 13 07:20:59.268816 kubelet[2712]: I0813 07:20:59.268419 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:20:59.269312 kubelet[2712]: I0813 07:20:59.269177 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:20:59.274971 containerd[1542]: time="2025-08-13T07:20:59.273920086Z" level=error msg="Failed to destroy network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:59.284850 containerd[1542]: time="2025-08-13T07:20:59.278875490Z" level=error msg="encountered an error cleaning up failed sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:59.284850 containerd[1542]: time="2025-08-13T07:20:59.278910364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs2xn,Uid:58b74c31-6d05-4f11-8c94-9d85e9d65a22,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:20:59.276544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3-shm.mount: Deactivated successfully. 
Aug 13 07:20:59.289694 kubelet[2712]: E0813 07:20:59.289664 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.289754 kubelet[2712]: E0813 07:20:59.289701 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cs2xn"
Aug 13 07:20:59.289754 kubelet[2712]: E0813 07:20:59.289715 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cs2xn"
Aug 13 07:20:59.289754 kubelet[2712]: E0813 07:20:59.289741 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cs2xn_calico-system(58b74c31-6d05-4f11-8c94-9d85e9d65a22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cs2xn_calico-system(58b74c31-6d05-4f11-8c94-9d85e9d65a22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22"
Aug 13 07:20:59.293133 kubelet[2712]: I0813 07:20:59.292704 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855"
Aug 13 07:20:59.300804 containerd[1542]: time="2025-08-13T07:20:59.300758306Z" level=info msg="StopPodSandbox for \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\""
Aug 13 07:20:59.301504 containerd[1542]: time="2025-08-13T07:20:59.301345963Z" level=info msg="StopPodSandbox for \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\""
Aug 13 07:20:59.302196 containerd[1542]: time="2025-08-13T07:20:59.302153410Z" level=info msg="Ensure that sandbox 80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db in task-service has been cleanup successfully"
Aug 13 07:20:59.302267 containerd[1542]: time="2025-08-13T07:20:59.302154055Z" level=info msg="Ensure that sandbox 0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855 in task-service has been cleanup successfully"
Aug 13 07:20:59.302333 containerd[1542]: time="2025-08-13T07:20:59.302196768Z" level=info msg="StopPodSandbox for \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\""
Aug 13 07:20:59.302418 containerd[1542]: time="2025-08-13T07:20:59.302402774Z" level=info msg="Ensure that sandbox b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d in task-service has been cleanup successfully"
Aug 13 07:20:59.328332 containerd[1542]: time="2025-08-13T07:20:59.328290482Z" level=error msg="StopPodSandbox for \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\" failed" error="failed to destroy network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.328526 kubelet[2712]: E0813 07:20:59.328502 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855"
Aug 13 07:20:59.329713 kubelet[2712]: E0813 07:20:59.328598 2712 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855"}
Aug 13 07:20:59.329713 kubelet[2712]: E0813 07:20:59.329655 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea5dce70-7db5-4a88-9ece-439f7b00c36d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 07:20:59.329713 kubelet[2712]: E0813 07:20:59.329674 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea5dce70-7db5-4a88-9ece-439f7b00c36d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m6rn2" podUID="ea5dce70-7db5-4a88-9ece-439f7b00c36d"
Aug 13 07:20:59.332996 containerd[1542]: time="2025-08-13T07:20:59.332969622Z" level=error msg="StopPodSandbox for \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\" failed" error="failed to destroy network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.333222 kubelet[2712]: E0813 07:20:59.333200 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db"
Aug 13 07:20:59.333345 kubelet[2712]: E0813 07:20:59.333285 2712 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db"}
Aug 13 07:20:59.333345 kubelet[2712]: E0813 07:20:59.333309 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb2e06ca-79db-46b7-a389-34bc602b3a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 07:20:59.333345 kubelet[2712]: E0813 07:20:59.333327 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb2e06ca-79db-46b7-a389-34bc602b3a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-w2f57" podUID="eb2e06ca-79db-46b7-a389-34bc602b3a47"
Aug 13 07:20:59.335300 containerd[1542]: time="2025-08-13T07:20:59.335264219Z" level=error msg="StopPodSandbox for \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\" failed" error="failed to destroy network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.335553 kubelet[2712]: E0813 07:20:59.335420 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d"
Aug 13 07:20:59.335553 kubelet[2712]: E0813 07:20:59.335451 2712 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d"}
Aug 13 07:20:59.335553 kubelet[2712]: E0813 07:20:59.335473 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1308b7c2-9bc6-44fd-8d86-34c607162a5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 07:20:59.335553 kubelet[2712]: E0813 07:20:59.335487 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1308b7c2-9bc6-44fd-8d86-34c607162a5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66794f6b97-5xnbz" podUID="1308b7c2-9bc6-44fd-8d86-34c607162a5d"
Aug 13 07:20:59.486811 containerd[1542]: time="2025-08-13T07:20:59.486731145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6685cc6896-jpz9d,Uid:8b049e0e-6f6a-4b4a-ac77-b6115705ff30,Namespace:calico-system,Attempt:0,}"
Aug 13 07:20:59.528261 containerd[1542]: time="2025-08-13T07:20:59.528216600Z" level=error msg="Failed to destroy network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.528470 containerd[1542]: time="2025-08-13T07:20:59.528452777Z" level=error msg="encountered an error cleaning up failed sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.528504 containerd[1542]: time="2025-08-13T07:20:59.528482828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6685cc6896-jpz9d,Uid:8b049e0e-6f6a-4b4a-ac77-b6115705ff30,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.528675 kubelet[2712]: E0813 07:20:59.528648 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.528713 kubelet[2712]: E0813 07:20:59.528687 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6685cc6896-jpz9d"
Aug 13 07:20:59.528713 kubelet[2712]: E0813 07:20:59.528699 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6685cc6896-jpz9d"
Aug 13 07:20:59.528765 kubelet[2712]: E0813 07:20:59.528732 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6685cc6896-jpz9d_calico-system(8b049e0e-6f6a-4b4a-ac77-b6115705ff30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6685cc6896-jpz9d_calico-system(8b049e0e-6f6a-4b4a-ac77-b6115705ff30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6685cc6896-jpz9d" podUID="8b049e0e-6f6a-4b4a-ac77-b6115705ff30"
Aug 13 07:20:59.529977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355-shm.mount: Deactivated successfully.
Aug 13 07:20:59.790619 containerd[1542]: time="2025-08-13T07:20:59.790569925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-kmjq6,Uid:b14a1626-6ffd-46ab-a2c5-1f786b38698c,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 07:20:59.794681 containerd[1542]: time="2025-08-13T07:20:59.794585728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-jsplw,Uid:bec5125a-0b32-4b9c-9b75-c6827db5f9b6,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 07:20:59.805009 containerd[1542]: time="2025-08-13T07:20:59.804107181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nwh9w,Uid:f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc,Namespace:calico-system,Attempt:0,}"
Aug 13 07:20:59.849613 containerd[1542]: time="2025-08-13T07:20:59.849576981Z" level=error msg="Failed to destroy network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.849848 containerd[1542]: time="2025-08-13T07:20:59.849832587Z" level=error msg="encountered an error cleaning up failed sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.849890 containerd[1542]: time="2025-08-13T07:20:59.849867708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-kmjq6,Uid:b14a1626-6ffd-46ab-a2c5-1f786b38698c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.851063 kubelet[2712]: E0813 07:20:59.850076 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.851063 kubelet[2712]: E0813 07:20:59.850197 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649d79c576-kmjq6"
Aug 13 07:20:59.851063 kubelet[2712]: E0813 07:20:59.850214 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649d79c576-kmjq6"
Aug 13 07:20:59.851369 kubelet[2712]: E0813 07:20:59.850247 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649d79c576-kmjq6_calico-apiserver(b14a1626-6ffd-46ab-a2c5-1f786b38698c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-649d79c576-kmjq6_calico-apiserver(b14a1626-6ffd-46ab-a2c5-1f786b38698c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649d79c576-kmjq6" podUID="b14a1626-6ffd-46ab-a2c5-1f786b38698c"
Aug 13 07:20:59.861140 containerd[1542]: time="2025-08-13T07:20:59.860968301Z" level=error msg="Failed to destroy network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.861832 containerd[1542]: time="2025-08-13T07:20:59.861806565Z" level=error msg="encountered an error cleaning up failed sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.861908 containerd[1542]: time="2025-08-13T07:20:59.861894915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-jsplw,Uid:bec5125a-0b32-4b9c-9b75-c6827db5f9b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.862270 kubelet[2712]: E0813 07:20:59.862238 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.862316 kubelet[2712]: E0813 07:20:59.862279 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649d79c576-jsplw"
Aug 13 07:20:59.862316 kubelet[2712]: E0813 07:20:59.862294 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649d79c576-jsplw"
Aug 13 07:20:59.862356 kubelet[2712]: E0813 07:20:59.862323 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649d79c576-jsplw_calico-apiserver(bec5125a-0b32-4b9c-9b75-c6827db5f9b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-649d79c576-jsplw_calico-apiserver(bec5125a-0b32-4b9c-9b75-c6827db5f9b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649d79c576-jsplw" podUID="bec5125a-0b32-4b9c-9b75-c6827db5f9b6"
Aug 13 07:20:59.875655 containerd[1542]: time="2025-08-13T07:20:59.875626183Z" level=error msg="Failed to destroy network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.875934 containerd[1542]: time="2025-08-13T07:20:59.875918100Z" level=error msg="encountered an error cleaning up failed sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.876017 containerd[1542]: time="2025-08-13T07:20:59.876003795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nwh9w,Uid:f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.876208 kubelet[2712]: E0813 07:20:59.876189 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:20:59.876289 kubelet[2712]: E0813 07:20:59.876276 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-nwh9w"
Aug 13 07:20:59.876336 kubelet[2712]: E0813 07:20:59.876328 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-nwh9w"
Aug 13 07:20:59.876418 kubelet[2712]: E0813 07:20:59.876404 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-nwh9w_calico-system(f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-nwh9w_calico-system(f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-nwh9w" podUID="f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc"
Aug 13 07:21:00.294705 kubelet[2712]: I0813 07:21:00.294679 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355"
Aug 13 07:21:00.296869 containerd[1542]: time="2025-08-13T07:21:00.296139014Z" level=info msg="StopPodSandbox for \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\""
Aug 13 07:21:00.296869 containerd[1542]: time="2025-08-13T07:21:00.296244610Z" level=info msg="Ensure that sandbox d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355 in task-service has been cleanup successfully"
Aug 13 07:21:00.298605 kubelet[2712]: I0813 07:21:00.298585 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc"
Aug 13 07:21:00.299352 containerd[1542]: time="2025-08-13T07:21:00.299339628Z" level=info msg="StopPodSandbox for \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\""
Aug 13 07:21:00.299699 containerd[1542]: time="2025-08-13T07:21:00.299687657Z" level=info msg="Ensure that sandbox 0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc in task-service has been cleanup successfully"
Aug 13 07:21:00.299996 kubelet[2712]: I0813 07:21:00.299983 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91"
Aug 13 07:21:00.301402 containerd[1542]: time="2025-08-13T07:21:00.301382953Z" level=info msg="StopPodSandbox for \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\""
Aug 13 07:21:00.302237 containerd[1542]: time="2025-08-13T07:21:00.302218197Z" level=info msg="Ensure that sandbox 470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91 in task-service has been cleanup successfully"
Aug 13 07:21:00.306114 kubelet[2712]: I0813 07:21:00.305736 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284"
Aug 13 07:21:00.307055 containerd[1542]: time="2025-08-13T07:21:00.307040068Z" level=info msg="StopPodSandbox for \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\""
Aug 13 07:21:00.307243 containerd[1542]: time="2025-08-13T07:21:00.307211429Z" level=info msg="Ensure that sandbox a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284 in task-service has been cleanup successfully"
Aug 13 07:21:00.310099 kubelet[2712]: I0813 07:21:00.310082 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3"
Aug 13 07:21:00.311620 containerd[1542]: time="2025-08-13T07:21:00.311291217Z" level=info msg="StopPodSandbox for \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\""
Aug 13 07:21:00.312269 containerd[1542]: time="2025-08-13T07:21:00.312225582Z" level=info msg="Ensure that sandbox cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3 in task-service has been cleanup successfully"
Aug 13 07:21:00.349856 containerd[1542]: time="2025-08-13T07:21:00.349041279Z" level=error msg="StopPodSandbox for \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\" failed" error="failed to destroy network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:21:00.349937 kubelet[2712]: E0813 07:21:00.349180 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc"
Aug 13 07:21:00.349937 kubelet[2712]: E0813 07:21:00.349211 2712 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc"}
Aug 13 07:21:00.349937 kubelet[2712]: E0813 07:21:00.349232 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bec5125a-0b32-4b9c-9b75-c6827db5f9b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 07:21:00.349937 kubelet[2712]: E0813 07:21:00.349250 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bec5125a-0b32-4b9c-9b75-c6827db5f9b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649d79c576-jsplw" podUID="bec5125a-0b32-4b9c-9b75-c6827db5f9b6"
Aug 13 07:21:00.358186 containerd[1542]: time="2025-08-13T07:21:00.358160904Z" level=error msg="StopPodSandbox for \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\" failed" error="failed to destroy network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:21:00.358400 kubelet[2712]: E0813 07:21:00.358378 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91"
Aug 13 07:21:00.358444 kubelet[2712]: E0813 07:21:00.358405 2712 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91"}
Aug 13 07:21:00.358444 kubelet[2712]: E0813 07:21:00.358424 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b14a1626-6ffd-46ab-a2c5-1f786b38698c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 07:21:00.358519 kubelet[2712]: E0813 07:21:00.358437 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b14a1626-6ffd-46ab-a2c5-1f786b38698c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649d79c576-kmjq6" podUID="b14a1626-6ffd-46ab-a2c5-1f786b38698c"
Aug 13 07:21:00.359276 containerd[1542]: time="2025-08-13T07:21:00.359262340Z" level=error msg="StopPodSandbox for \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\" failed" error="failed to destroy network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:21:00.359395 kubelet[2712]: E0813 07:21:00.359378 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355"
Aug 13 07:21:00.359425 kubelet[2712]: E0813 07:21:00.359397 2712 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355"}
Aug 13 07:21:00.359425 kubelet[2712]: E0813 07:21:00.359411 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 07:21:00.359425 kubelet[2712]: E0813 07:21:00.359420 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6685cc6896-jpz9d" podUID="8b049e0e-6f6a-4b4a-ac77-b6115705ff30"
Aug 13 07:21:00.359816 containerd[1542]: time="2025-08-13T07:21:00.359802933Z" level=error msg="StopPodSandbox for \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\" failed" error="failed to destroy network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:21:00.359992 kubelet[2712]: E0813 07:21:00.359978 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3"
Aug 13 07:21:00.360022 kubelet[2712]: E0813 07:21:00.359993 2712 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3"}
Aug 13 07:21:00.360022 kubelet[2712]: E0813 07:21:00.360005 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58b74c31-6d05-4f11-8c94-9d85e9d65a22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 07:21:00.360152 kubelet[2712]: E0813 07:21:00.360014 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58b74c31-6d05-4f11-8c94-9d85e9d65a22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cs2xn" podUID="58b74c31-6d05-4f11-8c94-9d85e9d65a22"
Aug 13 07:21:00.363039 containerd[1542]: time="2025-08-13T07:21:00.363015315Z" level=error msg="StopPodSandbox for \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\" failed" error="failed to destroy network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 07:21:00.363122 kubelet[2712]: E0813 07:21:00.363110 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284"
Aug 13 07:21:00.363147 kubelet[2712]: E0813 07:21:00.363130 2712
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284"} Aug 13 07:21:00.363165 kubelet[2712]: E0813 07:21:00.363148 2712 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:21:00.363206 kubelet[2712]: E0813 07:21:00.363161 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-nwh9w" podUID="f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc" Aug 13 07:21:00.576061 kubelet[2712]: I0813 07:21:00.575905 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:21:04.796858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676458596.mount: Deactivated successfully. 
Aug 13 07:21:04.973102 containerd[1542]: time="2025-08-13T07:21:04.963429787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:04.982664 containerd[1542]: time="2025-08-13T07:21:04.982606267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:21:05.013564 containerd[1542]: time="2025-08-13T07:21:05.013528193Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:05.022074 containerd[1542]: time="2025-08-13T07:21:05.022060119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:05.022397 containerd[1542]: time="2025-08-13T07:21:05.022288897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.75379794s" Aug 13 07:21:05.022397 containerd[1542]: time="2025-08-13T07:21:05.022308199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:21:05.055748 containerd[1542]: time="2025-08-13T07:21:05.055644508Z" level=info msg="CreateContainer within sandbox \"b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:21:05.134955 containerd[1542]: time="2025-08-13T07:21:05.134915115Z" level=info 
msg="CreateContainer within sandbox \"b4e980c143140453871e104f3f7966de756cccb067652d687d83bccabc781ded\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"381bd18713d3a6106bf61aa9ec5d5f67b53a987d46dcbbb590b5d1ab9aeff05b\"" Aug 13 07:21:05.135524 containerd[1542]: time="2025-08-13T07:21:05.135486695Z" level=info msg="StartContainer for \"381bd18713d3a6106bf61aa9ec5d5f67b53a987d46dcbbb590b5d1ab9aeff05b\"" Aug 13 07:21:05.259916 systemd[1]: Started cri-containerd-381bd18713d3a6106bf61aa9ec5d5f67b53a987d46dcbbb590b5d1ab9aeff05b.scope - libcontainer container 381bd18713d3a6106bf61aa9ec5d5f67b53a987d46dcbbb590b5d1ab9aeff05b. Aug 13 07:21:05.283969 containerd[1542]: time="2025-08-13T07:21:05.283935138Z" level=info msg="StartContainer for \"381bd18713d3a6106bf61aa9ec5d5f67b53a987d46dcbbb590b5d1ab9aeff05b\" returns successfully" Aug 13 07:21:05.839870 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:21:05.842192 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 07:21:06.232320 kubelet[2712]: I0813 07:21:06.224520 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xg459" podStartSLOduration=2.285168219 podStartE2EDuration="19.197948831s" podCreationTimestamp="2025-08-13 07:20:47 +0000 UTC" firstStartedPulling="2025-08-13 07:20:48.111589907 +0000 UTC m=+16.081022048" lastFinishedPulling="2025-08-13 07:21:05.024370518 +0000 UTC m=+32.993802660" observedRunningTime="2025-08-13 07:21:05.375937549 +0000 UTC m=+33.345369695" watchObservedRunningTime="2025-08-13 07:21:06.197948831 +0000 UTC m=+34.167380982" Aug 13 07:21:06.233373 containerd[1542]: time="2025-08-13T07:21:06.232919532Z" level=info msg="StopPodSandbox for \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\"" Aug 13 07:21:06.360532 kubelet[2712]: I0813 07:21:06.360509 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:06.651 [INFO][3890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:06.666 [INFO][3890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" iface="eth0" netns="/var/run/netns/cni-b52f2f86-b38d-bdca-2c6c-56c47e6ba3b0" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:06.666 [INFO][3890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" iface="eth0" netns="/var/run/netns/cni-b52f2f86-b38d-bdca-2c6c-56c47e6ba3b0" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:06.676 [INFO][3890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" iface="eth0" netns="/var/run/netns/cni-b52f2f86-b38d-bdca-2c6c-56c47e6ba3b0" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:06.676 [INFO][3890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:06.676 [INFO][3890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:07.802 [INFO][3908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:07.804 [INFO][3908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:07.806 [INFO][3908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:07.818 [WARNING][3908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:07.818 [INFO][3908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:07.820 [INFO][3908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:07.823351 containerd[1542]: 2025-08-13 07:21:07.821 [INFO][3890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:07.828306 systemd[1]: run-netns-cni\x2db52f2f86\x2db38d\x2dbdca\x2d2c6c\x2d56c47e6ba3b0.mount: Deactivated successfully. 
Aug 13 07:21:07.834272 containerd[1542]: time="2025-08-13T07:21:07.834247210Z" level=info msg="TearDown network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\" successfully" Aug 13 07:21:07.835140 containerd[1542]: time="2025-08-13T07:21:07.834358702Z" level=info msg="StopPodSandbox for \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\" returns successfully" Aug 13 07:21:07.850857 kernel: bpftool[4021]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:21:08.043390 systemd-networkd[1349]: vxlan.calico: Link UP Aug 13 07:21:08.043395 systemd-networkd[1349]: vxlan.calico: Gained carrier Aug 13 07:21:08.113676 kubelet[2712]: I0813 07:21:08.053763 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-ca-bundle\") pod \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\" (UID: \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\") " Aug 13 07:21:08.113676 kubelet[2712]: I0813 07:21:08.053830 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-backend-key-pair\") pod \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\" (UID: \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\") " Aug 13 07:21:08.113676 kubelet[2712]: I0813 07:21:08.053854 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dskv9\" (UniqueName: \"kubernetes.io/projected/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-kube-api-access-dskv9\") pod \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\" (UID: \"8b049e0e-6f6a-4b4a-ac77-b6115705ff30\") " Aug 13 07:21:08.129839 kubelet[2712]: I0813 07:21:08.114461 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-ca-bundle" (OuterVolumeSpecName: 
"whisker-ca-bundle") pod "8b049e0e-6f6a-4b4a-ac77-b6115705ff30" (UID: "8b049e0e-6f6a-4b4a-ac77-b6115705ff30"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:21:08.154430 kubelet[2712]: I0813 07:21:08.154406 2712 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 07:21:08.159227 systemd[1]: var-lib-kubelet-pods-8b049e0e\x2d6f6a\x2d4b4a\x2dac77\x2db6115705ff30-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddskv9.mount: Deactivated successfully. Aug 13 07:21:08.162482 kubelet[2712]: I0813 07:21:08.162457 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-kube-api-access-dskv9" (OuterVolumeSpecName: "kube-api-access-dskv9") pod "8b049e0e-6f6a-4b4a-ac77-b6115705ff30" (UID: "8b049e0e-6f6a-4b4a-ac77-b6115705ff30"). InnerVolumeSpecName "kube-api-access-dskv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:21:08.163857 kubelet[2712]: I0813 07:21:08.163793 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8b049e0e-6f6a-4b4a-ac77-b6115705ff30" (UID: "8b049e0e-6f6a-4b4a-ac77-b6115705ff30"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:21:08.164694 systemd[1]: var-lib-kubelet-pods-8b049e0e\x2d6f6a\x2d4b4a\x2dac77\x2db6115705ff30-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 07:21:08.220385 systemd[1]: Removed slice kubepods-besteffort-pod8b049e0e_6f6a_4b4a_ac77_b6115705ff30.slice - libcontainer container kubepods-besteffort-pod8b049e0e_6f6a_4b4a_ac77_b6115705ff30.slice. Aug 13 07:21:08.255323 kubelet[2712]: I0813 07:21:08.255218 2712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dskv9\" (UniqueName: \"kubernetes.io/projected/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-kube-api-access-dskv9\") on node \"localhost\" DevicePath \"\"" Aug 13 07:21:08.255323 kubelet[2712]: I0813 07:21:08.255239 2712 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b049e0e-6f6a-4b4a-ac77-b6115705ff30-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 07:21:08.895608 systemd[1]: Created slice kubepods-besteffort-pod155374c2_a438_4233_8509_2a4773dc2fe5.slice - libcontainer container kubepods-besteffort-pod155374c2_a438_4233_8509_2a4773dc2fe5.slice. Aug 13 07:21:09.032529 kubelet[2712]: I0813 07:21:09.032412 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/155374c2-a438-4233-8509-2a4773dc2fe5-whisker-backend-key-pair\") pod \"whisker-658f645c7b-kcmts\" (UID: \"155374c2-a438-4233-8509-2a4773dc2fe5\") " pod="calico-system/whisker-658f645c7b-kcmts" Aug 13 07:21:09.032529 kubelet[2712]: I0813 07:21:09.032443 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/155374c2-a438-4233-8509-2a4773dc2fe5-whisker-ca-bundle\") pod \"whisker-658f645c7b-kcmts\" (UID: \"155374c2-a438-4233-8509-2a4773dc2fe5\") " pod="calico-system/whisker-658f645c7b-kcmts" Aug 13 07:21:09.032529 kubelet[2712]: I0813 07:21:09.032463 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsf6h\" 
(UniqueName: \"kubernetes.io/projected/155374c2-a438-4233-8509-2a4773dc2fe5-kube-api-access-jsf6h\") pod \"whisker-658f645c7b-kcmts\" (UID: \"155374c2-a438-4233-8509-2a4773dc2fe5\") " pod="calico-system/whisker-658f645c7b-kcmts" Aug 13 07:21:09.215343 containerd[1542]: time="2025-08-13T07:21:09.215226667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658f645c7b-kcmts,Uid:155374c2-a438-4233-8509-2a4773dc2fe5,Namespace:calico-system,Attempt:0,}" Aug 13 07:21:09.361897 systemd-networkd[1349]: cali029d76dedfb: Link UP Aug 13 07:21:09.362038 systemd-networkd[1349]: cali029d76dedfb: Gained carrier Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.256 [INFO][4120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--658f645c7b--kcmts-eth0 whisker-658f645c7b- calico-system 155374c2-a438-4233-8509-2a4773dc2fe5 911 0 2025-08-13 07:21:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:658f645c7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-658f645c7b-kcmts eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali029d76dedfb [] [] }} ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.256 [INFO][4120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-eth0" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.283 [INFO][4133] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" HandleID="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Workload="localhost-k8s-whisker--658f645c7b--kcmts-eth0" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.284 [INFO][4133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" HandleID="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Workload="localhost-k8s-whisker--658f645c7b--kcmts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-658f645c7b-kcmts", "timestamp":"2025-08-13 07:21:09.283844524 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.284 [INFO][4133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.284 [INFO][4133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.284 [INFO][4133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.293 [INFO][4133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.327 [INFO][4133] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.330 [INFO][4133] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.332 [INFO][4133] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.333 [INFO][4133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.333 [INFO][4133] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.333 [INFO][4133] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6 Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.336 [INFO][4133] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.342 [INFO][4133] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.342 [INFO][4133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" host="localhost" Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.342 [INFO][4133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:09.381949 containerd[1542]: 2025-08-13 07:21:09.342 [INFO][4133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" HandleID="k8s-pod-network.cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Workload="localhost-k8s-whisker--658f645c7b--kcmts-eth0" Aug 13 07:21:09.393578 containerd[1542]: 2025-08-13 07:21:09.344 [INFO][4120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--658f645c7b--kcmts-eth0", GenerateName:"whisker-658f645c7b-", Namespace:"calico-system", SelfLink:"", UID:"155374c2-a438-4233-8509-2a4773dc2fe5", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658f645c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-658f645c7b-kcmts", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali029d76dedfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:09.393578 containerd[1542]: 2025-08-13 07:21:09.344 [INFO][4120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-eth0" Aug 13 07:21:09.393578 containerd[1542]: 2025-08-13 07:21:09.344 [INFO][4120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali029d76dedfb ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-eth0" Aug 13 07:21:09.393578 containerd[1542]: 2025-08-13 07:21:09.367 [INFO][4120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-eth0" Aug 13 07:21:09.393578 containerd[1542]: 2025-08-13 07:21:09.367 [INFO][4120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" 
WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--658f645c7b--kcmts-eth0", GenerateName:"whisker-658f645c7b-", Namespace:"calico-system", SelfLink:"", UID:"155374c2-a438-4233-8509-2a4773dc2fe5", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658f645c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6", Pod:"whisker-658f645c7b-kcmts", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali029d76dedfb", MAC:"1a:eb:a6:b1:4b:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:09.393578 containerd[1542]: 2025-08-13 07:21:09.378 [INFO][4120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6" Namespace="calico-system" Pod="whisker-658f645c7b-kcmts" WorkloadEndpoint="localhost-k8s-whisker--658f645c7b--kcmts-eth0" Aug 13 07:21:09.402335 containerd[1542]: time="2025-08-13T07:21:09.401529144Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:09.402335 containerd[1542]: time="2025-08-13T07:21:09.402113921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:09.402335 containerd[1542]: time="2025-08-13T07:21:09.402144410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:09.402549 containerd[1542]: time="2025-08-13T07:21:09.402302041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:09.428959 systemd[1]: Started cri-containerd-cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6.scope - libcontainer container cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6. Aug 13 07:21:09.441361 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:09.473569 containerd[1542]: time="2025-08-13T07:21:09.473244152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658f645c7b-kcmts,Uid:155374c2-a438-4233-8509-2a4773dc2fe5,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6\"" Aug 13 07:21:09.621615 containerd[1542]: time="2025-08-13T07:21:09.621536179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:21:09.935971 systemd-networkd[1349]: vxlan.calico: Gained IPv6LL Aug 13 07:21:10.173059 containerd[1542]: time="2025-08-13T07:21:10.172907672Z" level=info msg="StopPodSandbox for \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\"" Aug 13 07:21:10.189407 kubelet[2712]: I0813 07:21:10.189143 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b049e0e-6f6a-4b4a-ac77-b6115705ff30" 
path="/var/lib/kubelet/pods/8b049e0e-6f6a-4b4a-ac77-b6115705ff30/volumes" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.213 [INFO][4203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.213 [INFO][4203] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" iface="eth0" netns="/var/run/netns/cni-7971d8b7-6084-1031-c3d6-a3bd5e9260f3" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.213 [INFO][4203] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" iface="eth0" netns="/var/run/netns/cni-7971d8b7-6084-1031-c3d6-a3bd5e9260f3" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.213 [INFO][4203] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" iface="eth0" netns="/var/run/netns/cni-7971d8b7-6084-1031-c3d6-a3bd5e9260f3" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.213 [INFO][4203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.213 [INFO][4203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.250 [INFO][4210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.250 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.250 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.253 [WARNING][4210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.253 [INFO][4210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.254 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:10.257087 containerd[1542]: 2025-08-13 07:21:10.255 [INFO][4203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:10.258944 containerd[1542]: time="2025-08-13T07:21:10.258913700Z" level=info msg="TearDown network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\" successfully" Aug 13 07:21:10.258944 containerd[1542]: time="2025-08-13T07:21:10.258941090Z" level=info msg="StopPodSandbox for \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\" returns successfully" Aug 13 07:21:10.259285 systemd[1]: run-netns-cni\x2d7971d8b7\x2d6084\x2d1031\x2dc3d6\x2da3bd5e9260f3.mount: Deactivated successfully. 
Aug 13 07:21:10.276289 containerd[1542]: time="2025-08-13T07:21:10.276266365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66794f6b97-5xnbz,Uid:1308b7c2-9bc6-44fd-8d86-34c607162a5d,Namespace:calico-system,Attempt:1,}" Aug 13 07:21:10.510800 systemd-networkd[1349]: cali46b0f5135aa: Link UP Aug 13 07:21:10.512006 systemd-networkd[1349]: cali46b0f5135aa: Gained carrier Aug 13 07:21:10.512919 systemd-networkd[1349]: cali029d76dedfb: Gained IPv6LL Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.455 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0 calico-kube-controllers-66794f6b97- calico-system 1308b7c2-9bc6-44fd-8d86-34c607162a5d 921 0 2025-08-13 07:20:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66794f6b97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66794f6b97-5xnbz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali46b0f5135aa [] [] }} ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.455 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.472 [INFO][4228] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" HandleID="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.472 [INFO][4228] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" HandleID="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66794f6b97-5xnbz", "timestamp":"2025-08-13 07:21:10.472405044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.472 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.472 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.472 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.477 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.479 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.480 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.481 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.482 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.482 [INFO][4228] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.483 [INFO][4228] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.490 [INFO][4228] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.506 [INFO][4228] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.506 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" host="localhost" Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.506 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:10.530477 containerd[1542]: 2025-08-13 07:21:10.506 [INFO][4228] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" HandleID="k8s-pod-network.537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.532801 containerd[1542]: 2025-08-13 07:21:10.507 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0", GenerateName:"calico-kube-controllers-66794f6b97-", Namespace:"calico-system", SelfLink:"", UID:"1308b7c2-9bc6-44fd-8d86-34c607162a5d", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66794f6b97", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66794f6b97-5xnbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46b0f5135aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:10.532801 containerd[1542]: 2025-08-13 07:21:10.508 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.532801 containerd[1542]: 2025-08-13 07:21:10.508 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46b0f5135aa ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.532801 containerd[1542]: 2025-08-13 07:21:10.511 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.532801 containerd[1542]: 
2025-08-13 07:21:10.513 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0", GenerateName:"calico-kube-controllers-66794f6b97-", Namespace:"calico-system", SelfLink:"", UID:"1308b7c2-9bc6-44fd-8d86-34c607162a5d", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66794f6b97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed", Pod:"calico-kube-controllers-66794f6b97-5xnbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46b0f5135aa", MAC:"3e:9c:f9:71:6c:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:10.532801 containerd[1542]: 
2025-08-13 07:21:10.528 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed" Namespace="calico-system" Pod="calico-kube-controllers-66794f6b97-5xnbz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:10.546980 containerd[1542]: time="2025-08-13T07:21:10.546690805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:10.546980 containerd[1542]: time="2025-08-13T07:21:10.546886889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:10.546980 containerd[1542]: time="2025-08-13T07:21:10.546895466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:10.547801 containerd[1542]: time="2025-08-13T07:21:10.547279315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:10.561926 systemd[1]: Started cri-containerd-537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed.scope - libcontainer container 537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed. 
Aug 13 07:21:10.570322 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:10.590472 containerd[1542]: time="2025-08-13T07:21:10.590447197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66794f6b97-5xnbz,Uid:1308b7c2-9bc6-44fd-8d86-34c607162a5d,Namespace:calico-system,Attempt:1,} returns sandbox id \"537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed\"" Aug 13 07:21:11.172765 containerd[1542]: time="2025-08-13T07:21:11.172574317Z" level=info msg="StopPodSandbox for \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\"" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.205 [INFO][4294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.205 [INFO][4294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" iface="eth0" netns="/var/run/netns/cni-cac0ffc9-a5e8-2b16-dfcb-b9bce1120b3f" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.206 [INFO][4294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" iface="eth0" netns="/var/run/netns/cni-cac0ffc9-a5e8-2b16-dfcb-b9bce1120b3f" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.206 [INFO][4294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" iface="eth0" netns="/var/run/netns/cni-cac0ffc9-a5e8-2b16-dfcb-b9bce1120b3f" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.206 [INFO][4294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.206 [INFO][4294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.236 [INFO][4301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.237 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.237 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.240 [WARNING][4301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.240 [INFO][4301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.241 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:11.243891 containerd[1542]: 2025-08-13 07:21:11.242 [INFO][4294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:11.250465 containerd[1542]: time="2025-08-13T07:21:11.245304617Z" level=info msg="TearDown network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\" successfully" Aug 13 07:21:11.250465 containerd[1542]: time="2025-08-13T07:21:11.245325513Z" level=info msg="StopPodSandbox for \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\" returns successfully" Aug 13 07:21:11.250465 containerd[1542]: time="2025-08-13T07:21:11.245721390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-kmjq6,Uid:b14a1626-6ffd-46ab-a2c5-1f786b38698c,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:21:11.246119 systemd[1]: run-netns-cni\x2dcac0ffc9\x2da5e8\x2d2b16\x2ddfcb\x2db9bce1120b3f.mount: Deactivated successfully. 
Aug 13 07:21:11.500188 containerd[1542]: time="2025-08-13T07:21:11.499611961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:11.502775 containerd[1542]: time="2025-08-13T07:21:11.502449872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:21:11.510519 containerd[1542]: time="2025-08-13T07:21:11.510476998Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:11.517101 systemd-networkd[1349]: cali0ec092aa222: Link UP Aug 13 07:21:11.517440 systemd-networkd[1349]: cali0ec092aa222: Gained carrier Aug 13 07:21:11.519751 containerd[1542]: time="2025-08-13T07:21:11.519720775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:11.522101 containerd[1542]: time="2025-08-13T07:21:11.522077537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.900510976s" Aug 13 07:21:11.522192 containerd[1542]: time="2025-08-13T07:21:11.522103156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:21:11.523112 containerd[1542]: time="2025-08-13T07:21:11.522959224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:21:11.531813 
containerd[1542]: time="2025-08-13T07:21:11.523282670Z" level=info msg="CreateContainer within sandbox \"cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.464 [INFO][4313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0 calico-apiserver-649d79c576- calico-apiserver b14a1626-6ffd-46ab-a2c5-1f786b38698c 928 0 2025-08-13 07:20:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649d79c576 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-649d79c576-kmjq6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ec092aa222 [] [] }} ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.464 [INFO][4313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.479 [INFO][4326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" HandleID="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.535833 
containerd[1542]: 2025-08-13 07:21:11.479 [INFO][4326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" HandleID="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-649d79c576-kmjq6", "timestamp":"2025-08-13 07:21:11.479395857 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.479 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.479 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.479 [INFO][4326] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.490 [INFO][4326] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.492 [INFO][4326] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.496 [INFO][4326] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.497 [INFO][4326] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.498 [INFO][4326] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.498 [INFO][4326] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.499 [INFO][4326] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071 Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.503 [INFO][4326] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.514 [INFO][4326] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.514 [INFO][4326] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" host="localhost" Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.514 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:11.535833 containerd[1542]: 2025-08-13 07:21:11.514 [INFO][4326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" HandleID="k8s-pod-network.0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.536256 containerd[1542]: 2025-08-13 07:21:11.515 [INFO][4313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b14a1626-6ffd-46ab-a2c5-1f786b38698c", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-649d79c576-kmjq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ec092aa222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:11.536256 containerd[1542]: 2025-08-13 07:21:11.515 [INFO][4313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.536256 containerd[1542]: 2025-08-13 07:21:11.515 [INFO][4313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ec092aa222 ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.536256 containerd[1542]: 2025-08-13 07:21:11.517 [INFO][4313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.536256 containerd[1542]: 2025-08-13 07:21:11.517 [INFO][4313] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b14a1626-6ffd-46ab-a2c5-1f786b38698c", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071", Pod:"calico-apiserver-649d79c576-kmjq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ec092aa222", MAC:"5e:26:ef:71:59:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:11.536256 containerd[1542]: 2025-08-13 07:21:11.529 [INFO][4313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-kmjq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:11.577243 containerd[1542]: time="2025-08-13T07:21:11.577079221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:11.577243 containerd[1542]: time="2025-08-13T07:21:11.577107363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:11.577243 containerd[1542]: time="2025-08-13T07:21:11.577114104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:11.577243 containerd[1542]: time="2025-08-13T07:21:11.577151311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:11.592908 systemd[1]: Started cri-containerd-0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071.scope - libcontainer container 0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071. 
Aug 13 07:21:11.602106 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:11.624378 containerd[1542]: time="2025-08-13T07:21:11.624358258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-kmjq6,Uid:b14a1626-6ffd-46ab-a2c5-1f786b38698c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071\"" Aug 13 07:21:11.642936 containerd[1542]: time="2025-08-13T07:21:11.642905303Z" level=info msg="CreateContainer within sandbox \"cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a5d560a20e14a1940b69d9b9445a67273f54f31068090e9d5e3b42878188d71e\"" Aug 13 07:21:11.643525 containerd[1542]: time="2025-08-13T07:21:11.643497277Z" level=info msg="StartContainer for \"a5d560a20e14a1940b69d9b9445a67273f54f31068090e9d5e3b42878188d71e\"" Aug 13 07:21:11.663922 systemd[1]: Started cri-containerd-a5d560a20e14a1940b69d9b9445a67273f54f31068090e9d5e3b42878188d71e.scope - libcontainer container a5d560a20e14a1940b69d9b9445a67273f54f31068090e9d5e3b42878188d71e. 
Aug 13 07:21:11.663955 systemd-networkd[1349]: cali46b0f5135aa: Gained IPv6LL Aug 13 07:21:11.696524 containerd[1542]: time="2025-08-13T07:21:11.696495772Z" level=info msg="StartContainer for \"a5d560a20e14a1940b69d9b9445a67273f54f31068090e9d5e3b42878188d71e\" returns successfully" Aug 13 07:21:12.174076 containerd[1542]: time="2025-08-13T07:21:12.173403038Z" level=info msg="StopPodSandbox for \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\"" Aug 13 07:21:12.174076 containerd[1542]: time="2025-08-13T07:21:12.173510818Z" level=info msg="StopPodSandbox for \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\"" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.306 [INFO][4439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.306 [INFO][4439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" iface="eth0" netns="/var/run/netns/cni-731f2d5c-b375-e3c6-9d01-dc6c752abf21" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.306 [INFO][4439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" iface="eth0" netns="/var/run/netns/cni-731f2d5c-b375-e3c6-9d01-dc6c752abf21" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.306 [INFO][4439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" iface="eth0" netns="/var/run/netns/cni-731f2d5c-b375-e3c6-9d01-dc6c752abf21" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.306 [INFO][4439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.306 [INFO][4439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.332 [INFO][4451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.333 [INFO][4451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.333 [INFO][4451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.337 [WARNING][4451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.337 [INFO][4451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.338 [INFO][4451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:12.343705 containerd[1542]: 2025-08-13 07:21:12.340 [INFO][4439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:12.343705 containerd[1542]: time="2025-08-13T07:21:12.341769377Z" level=info msg="TearDown network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\" successfully" Aug 13 07:21:12.343705 containerd[1542]: time="2025-08-13T07:21:12.341784492Z" level=info msg="StopPodSandbox for \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\" returns successfully" Aug 13 07:21:12.343705 containerd[1542]: time="2025-08-13T07:21:12.342158055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2f57,Uid:eb2e06ca-79db-46b7-a389-34bc602b3a47,Namespace:kube-system,Attempt:1,}" Aug 13 07:21:12.343458 systemd[1]: run-netns-cni\x2d731f2d5c\x2db375\x2de3c6\x2d9d01\x2ddc6c752abf21.mount: Deactivated successfully. 
Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.315 [INFO][4438] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.315 [INFO][4438] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" iface="eth0" netns="/var/run/netns/cni-b8acccb4-eed4-eaec-b941-ec4ed81baf6d" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.316 [INFO][4438] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" iface="eth0" netns="/var/run/netns/cni-b8acccb4-eed4-eaec-b941-ec4ed81baf6d" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.316 [INFO][4438] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" iface="eth0" netns="/var/run/netns/cni-b8acccb4-eed4-eaec-b941-ec4ed81baf6d" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.316 [INFO][4438] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.316 [INFO][4438] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.334 [INFO][4456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.334 [INFO][4456] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.338 [INFO][4456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.346 [WARNING][4456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.346 [INFO][4456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.346 [INFO][4456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:12.352251 containerd[1542]: 2025-08-13 07:21:12.348 [INFO][4438] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:12.352251 containerd[1542]: time="2025-08-13T07:21:12.349297652Z" level=info msg="TearDown network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\" successfully" Aug 13 07:21:12.352251 containerd[1542]: time="2025-08-13T07:21:12.349314224Z" level=info msg="StopPodSandbox for \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\" returns successfully" Aug 13 07:21:12.352251 containerd[1542]: time="2025-08-13T07:21:12.349712914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m6rn2,Uid:ea5dce70-7db5-4a88-9ece-439f7b00c36d,Namespace:kube-system,Attempt:1,}" Aug 13 07:21:12.350915 systemd[1]: run-netns-cni\x2db8acccb4\x2deed4\x2deaec\x2db941\x2dec4ed81baf6d.mount: Deactivated successfully. Aug 13 07:21:12.687946 systemd-networkd[1349]: cali0ec092aa222: Gained IPv6LL Aug 13 07:21:12.721767 systemd-networkd[1349]: cali6f01ad1a1ee: Link UP Aug 13 07:21:12.721897 systemd-networkd[1349]: cali6f01ad1a1ee: Gained carrier Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.652 [INFO][4467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0 coredns-7c65d6cfc9- kube-system ea5dce70-7db5-4a88-9ece-439f7b00c36d 943 0 2025-08-13 07:20:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-m6rn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f01ad1a1ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-" Aug 13 
07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.652 [INFO][4467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.675 [INFO][4489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" HandleID="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.675 [INFO][4489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" HandleID="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-m6rn2", "timestamp":"2025-08-13 07:21:12.675474854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.675 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.675 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.675 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.680 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.683 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.688 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.689 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.690 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.691 [INFO][4489] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.692 [INFO][4489] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10 Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.694 [INFO][4489] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.702 [INFO][4489] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.702 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" host="localhost" Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.702 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:12.747598 containerd[1542]: 2025-08-13 07:21:12.702 [INFO][4489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" HandleID="k8s-pod-network.e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.752246 containerd[1542]: 2025-08-13 07:21:12.714 [INFO][4467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea5dce70-7db5-4a88-9ece-439f7b00c36d", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-m6rn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f01ad1a1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:12.752246 containerd[1542]: 2025-08-13 07:21:12.719 [INFO][4467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.752246 containerd[1542]: 2025-08-13 07:21:12.719 [INFO][4467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f01ad1a1ee ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.752246 containerd[1542]: 2025-08-13 07:21:12.722 [INFO][4467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.752246 containerd[1542]: 2025-08-13 07:21:12.722 [INFO][4467] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea5dce70-7db5-4a88-9ece-439f7b00c36d", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10", Pod:"coredns-7c65d6cfc9-m6rn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f01ad1a1ee", MAC:"b2:5f:e5:36:ac:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:12.752246 containerd[1542]: 2025-08-13 07:21:12.741 [INFO][4467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m6rn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:12.767233 containerd[1542]: time="2025-08-13T07:21:12.767170272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:12.767839 containerd[1542]: time="2025-08-13T07:21:12.767724152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:12.767839 containerd[1542]: time="2025-08-13T07:21:12.767759325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:12.768015 containerd[1542]: time="2025-08-13T07:21:12.767954414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:12.782979 systemd[1]: Started cri-containerd-e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10.scope - libcontainer container e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10. 
Aug 13 07:21:12.796623 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:12.804248 systemd-networkd[1349]: cali70258bc21f5: Link UP Aug 13 07:21:12.805266 systemd-networkd[1349]: cali70258bc21f5: Gained carrier Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.670 [INFO][4478] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0 coredns-7c65d6cfc9- kube-system eb2e06ca-79db-46b7-a389-34bc602b3a47 942 0 2025-08-13 07:20:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-w2f57 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70258bc21f5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.670 [INFO][4478] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.694 [INFO][4497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" HandleID="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.694 [INFO][4497] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" HandleID="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-w2f57", "timestamp":"2025-08-13 07:21:12.694311426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.694 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.702 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.702 [INFO][4497] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.781 [INFO][4497] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.784 [INFO][4497] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.788 [INFO][4497] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.789 [INFO][4497] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.791 [INFO][4497] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.791 [INFO][4497] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.792 [INFO][4497] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970 Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.794 [INFO][4497] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.798 [INFO][4497] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.798 [INFO][4497] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" host="localhost" Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.798 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:21:12.816036 containerd[1542]: 2025-08-13 07:21:12.798 [INFO][4497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" HandleID="k8s-pod-network.b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.816622 containerd[1542]: 2025-08-13 07:21:12.802 [INFO][4478] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb2e06ca-79db-46b7-a389-34bc602b3a47", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-w2f57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70258bc21f5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:12.816622 containerd[1542]: 2025-08-13 07:21:12.802 [INFO][4478] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.816622 containerd[1542]: 2025-08-13 07:21:12.802 [INFO][4478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70258bc21f5 ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.816622 containerd[1542]: 2025-08-13 07:21:12.805 [INFO][4478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.816622 containerd[1542]: 2025-08-13 07:21:12.806 [INFO][4478] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb2e06ca-79db-46b7-a389-34bc602b3a47", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970", Pod:"coredns-7c65d6cfc9-w2f57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70258bc21f5", MAC:"fe:02:aa:0e:95:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:12.816622 containerd[1542]: 2025-08-13 07:21:12.814 [INFO][4478] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w2f57" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:12.835047 containerd[1542]: time="2025-08-13T07:21:12.834969574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m6rn2,Uid:ea5dce70-7db5-4a88-9ece-439f7b00c36d,Namespace:kube-system,Attempt:1,} returns sandbox id \"e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10\"" Aug 13 07:21:12.838846 containerd[1542]: time="2025-08-13T07:21:12.838680937Z" level=info msg="CreateContainer within sandbox \"e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:21:12.842094 containerd[1542]: time="2025-08-13T07:21:12.841372722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:12.842094 containerd[1542]: time="2025-08-13T07:21:12.841403905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:12.842094 containerd[1542]: time="2025-08-13T07:21:12.841410417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:12.842094 containerd[1542]: time="2025-08-13T07:21:12.841455854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:12.853951 systemd[1]: Started cri-containerd-b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970.scope - libcontainer container b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970. 
Aug 13 07:21:12.859226 containerd[1542]: time="2025-08-13T07:21:12.859153381Z" level=info msg="CreateContainer within sandbox \"e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d0044253190aca07588b293c816b565d3c5dcc1a5c795c07134ba872bd877a1\"" Aug 13 07:21:12.860393 containerd[1542]: time="2025-08-13T07:21:12.859656656Z" level=info msg="StartContainer for \"7d0044253190aca07588b293c816b565d3c5dcc1a5c795c07134ba872bd877a1\"" Aug 13 07:21:12.866720 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:12.880963 systemd[1]: Started cri-containerd-7d0044253190aca07588b293c816b565d3c5dcc1a5c795c07134ba872bd877a1.scope - libcontainer container 7d0044253190aca07588b293c816b565d3c5dcc1a5c795c07134ba872bd877a1. Aug 13 07:21:12.898473 containerd[1542]: time="2025-08-13T07:21:12.898452582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2f57,Uid:eb2e06ca-79db-46b7-a389-34bc602b3a47,Namespace:kube-system,Attempt:1,} returns sandbox id \"b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970\"" Aug 13 07:21:12.909294 containerd[1542]: time="2025-08-13T07:21:12.909266167Z" level=info msg="CreateContainer within sandbox \"b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:21:12.918476 containerd[1542]: time="2025-08-13T07:21:12.918427868Z" level=info msg="CreateContainer within sandbox \"b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c29bf7d7a50b8991d0bddb1689151c7a70252ff0bdc918e1dfba4ec82d686fc\"" Aug 13 07:21:12.919155 containerd[1542]: time="2025-08-13T07:21:12.918919368Z" level=info msg="StartContainer for \"2c29bf7d7a50b8991d0bddb1689151c7a70252ff0bdc918e1dfba4ec82d686fc\"" Aug 13 
07:21:12.929539 containerd[1542]: time="2025-08-13T07:21:12.929518322Z" level=info msg="StartContainer for \"7d0044253190aca07588b293c816b565d3c5dcc1a5c795c07134ba872bd877a1\" returns successfully" Aug 13 07:21:12.942944 systemd[1]: Started cri-containerd-2c29bf7d7a50b8991d0bddb1689151c7a70252ff0bdc918e1dfba4ec82d686fc.scope - libcontainer container 2c29bf7d7a50b8991d0bddb1689151c7a70252ff0bdc918e1dfba4ec82d686fc. Aug 13 07:21:12.966582 containerd[1542]: time="2025-08-13T07:21:12.966557439Z" level=info msg="StartContainer for \"2c29bf7d7a50b8991d0bddb1689151c7a70252ff0bdc918e1dfba4ec82d686fc\" returns successfully" Aug 13 07:21:13.171723 containerd[1542]: time="2025-08-13T07:21:13.171682943Z" level=info msg="StopPodSandbox for \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\"" Aug 13 07:21:13.172055 containerd[1542]: time="2025-08-13T07:21:13.171912242Z" level=info msg="StopPodSandbox for \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\"" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.289 [INFO][4687] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.289 [INFO][4687] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" iface="eth0" netns="/var/run/netns/cni-0e04f0bb-679c-3ae0-901e-db012a925c38" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.289 [INFO][4687] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" iface="eth0" netns="/var/run/netns/cni-0e04f0bb-679c-3ae0-901e-db012a925c38" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.290 [INFO][4687] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" iface="eth0" netns="/var/run/netns/cni-0e04f0bb-679c-3ae0-901e-db012a925c38" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.291 [INFO][4687] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.291 [INFO][4687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.324 [INFO][4713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.324 [INFO][4713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.324 [INFO][4713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.329 [WARNING][4713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.329 [INFO][4713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.330 [INFO][4713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:13.335660 containerd[1542]: 2025-08-13 07:21:13.334 [INFO][4687] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:13.335995 containerd[1542]: time="2025-08-13T07:21:13.335742109Z" level=info msg="TearDown network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\" successfully" Aug 13 07:21:13.335995 containerd[1542]: time="2025-08-13T07:21:13.335759451Z" level=info msg="StopPodSandbox for \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\" returns successfully" Aug 13 07:21:13.338187 containerd[1542]: time="2025-08-13T07:21:13.336176590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nwh9w,Uid:f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc,Namespace:calico-system,Attempt:1,}" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.281 [INFO][4686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.285 [INFO][4686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" iface="eth0" netns="/var/run/netns/cni-6600a929-d7e0-6466-0b48-5ba732f5b502" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.285 [INFO][4686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" iface="eth0" netns="/var/run/netns/cni-6600a929-d7e0-6466-0b48-5ba732f5b502" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.285 [INFO][4686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" iface="eth0" netns="/var/run/netns/cni-6600a929-d7e0-6466-0b48-5ba732f5b502" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.285 [INFO][4686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.285 [INFO][4686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.314 [INFO][4707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.315 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.315 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.322 [WARNING][4707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.322 [INFO][4707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.323 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:13.338187 containerd[1542]: 2025-08-13 07:21:13.334 [INFO][4686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:13.337955 systemd[1]: run-netns-cni\x2d0e04f0bb\x2d679c\x2d3ae0\x2d901e\x2ddb012a925c38.mount: Deactivated successfully. Aug 13 07:21:13.342862 containerd[1542]: time="2025-08-13T07:21:13.338526584Z" level=info msg="TearDown network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\" successfully" Aug 13 07:21:13.342862 containerd[1542]: time="2025-08-13T07:21:13.338538692Z" level=info msg="StopPodSandbox for \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\" returns successfully" Aug 13 07:21:13.342862 containerd[1542]: time="2025-08-13T07:21:13.339650973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs2xn,Uid:58b74c31-6d05-4f11-8c94-9d85e9d65a22,Namespace:calico-system,Attempt:1,}" Aug 13 07:21:13.342765 systemd[1]: run-netns-cni\x2d6600a929\x2dd7e0\x2d6466\x2d0b48\x2d5ba732f5b502.mount: Deactivated successfully. 
Aug 13 07:21:13.428759 kubelet[2712]: I0813 07:21:13.428713 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m6rn2" podStartSLOduration=37.427603506 podStartE2EDuration="37.427603506s" podCreationTimestamp="2025-08-13 07:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:21:13.427519723 +0000 UTC m=+41.396951868" watchObservedRunningTime="2025-08-13 07:21:13.427603506 +0000 UTC m=+41.397035651" Aug 13 07:21:13.457141 kubelet[2712]: I0813 07:21:13.457055 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w2f57" podStartSLOduration=37.456949524 podStartE2EDuration="37.456949524s" podCreationTimestamp="2025-08-13 07:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:21:13.456619736 +0000 UTC m=+41.426051886" watchObservedRunningTime="2025-08-13 07:21:13.456949524 +0000 UTC m=+41.426381669" Aug 13 07:21:13.621111 systemd-networkd[1349]: calie6e01a11ed9: Link UP Aug 13 07:21:13.626296 systemd-networkd[1349]: calie6e01a11ed9: Gained carrier Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.537 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0 goldmane-58fd7646b9- calico-system f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc 960 0 2025-08-13 07:20:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-nwh9w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie6e01a11ed9 [] [] }} 
ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.537 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.579 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" HandleID="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.579 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" HandleID="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-nwh9w", "timestamp":"2025-08-13 07:21:13.579020591 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.579 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.579 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.579 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.584 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.596 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.600 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.604 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.607 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.607 [INFO][4751] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.608 [INFO][4751] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42 Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.611 [INFO][4751] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.614 [INFO][4751] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.614 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" host="localhost" Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.614 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:13.635932 containerd[1542]: 2025-08-13 07:21:13.614 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" HandleID="k8s-pod-network.0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.637346 containerd[1542]: 2025-08-13 07:21:13.616 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-nwh9w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie6e01a11ed9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:13.637346 containerd[1542]: 2025-08-13 07:21:13.616 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.637346 containerd[1542]: 2025-08-13 07:21:13.616 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6e01a11ed9 ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.637346 containerd[1542]: 2025-08-13 07:21:13.624 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.637346 containerd[1542]: 2025-08-13 07:21:13.625 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42", Pod:"goldmane-58fd7646b9-nwh9w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie6e01a11ed9", MAC:"3e:83:25:29:e3:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:13.637346 containerd[1542]: 2025-08-13 07:21:13.633 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42" Namespace="calico-system" Pod="goldmane-58fd7646b9-nwh9w" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:13.658881 containerd[1542]: time="2025-08-13T07:21:13.658461279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:13.658881 containerd[1542]: time="2025-08-13T07:21:13.658522456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:13.658881 containerd[1542]: time="2025-08-13T07:21:13.658536163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:13.658881 containerd[1542]: time="2025-08-13T07:21:13.658838545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:13.683945 systemd[1]: Started cri-containerd-0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42.scope - libcontainer container 0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42. 
Aug 13 07:21:13.701782 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:13.723757 systemd-networkd[1349]: cali9dd9bc586f4: Link UP Aug 13 07:21:13.724598 systemd-networkd[1349]: cali9dd9bc586f4: Gained carrier Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.545 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cs2xn-eth0 csi-node-driver- calico-system 58b74c31-6d05-4f11-8c94-9d85e9d65a22 959 0 2025-08-13 07:20:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cs2xn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9dd9bc586f4 [] [] }} ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Namespace="calico-system" Pod="csi-node-driver-cs2xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.545 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Namespace="calico-system" Pod="csi-node-driver-cs2xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.597 [INFO][4756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" HandleID="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.758965 containerd[1542]: 
2025-08-13 07:21:13.597 [INFO][4756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" HandleID="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cs2xn", "timestamp":"2025-08-13 07:21:13.597540421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.598 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.614 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.614 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.685 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.692 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.701 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.703 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.705 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.705 [INFO][4756] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.706 [INFO][4756] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35 Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.709 [INFO][4756] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.715 [INFO][4756] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.715 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" host="localhost" Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.715 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:13.758965 containerd[1542]: 2025-08-13 07:21:13.715 [INFO][4756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" HandleID="k8s-pod-network.aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.765568 containerd[1542]: 2025-08-13 07:21:13.718 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Namespace="calico-system" Pod="csi-node-driver-cs2xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cs2xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58b74c31-6d05-4f11-8c94-9d85e9d65a22", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cs2xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dd9bc586f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:13.765568 containerd[1542]: 2025-08-13 07:21:13.718 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Namespace="calico-system" Pod="csi-node-driver-cs2xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.765568 containerd[1542]: 2025-08-13 07:21:13.718 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dd9bc586f4 ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Namespace="calico-system" Pod="csi-node-driver-cs2xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.765568 containerd[1542]: 2025-08-13 07:21:13.724 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Namespace="calico-system" Pod="csi-node-driver-cs2xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.765568 containerd[1542]: 2025-08-13 07:21:13.729 [INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" 
Namespace="calico-system" Pod="csi-node-driver-cs2xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cs2xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58b74c31-6d05-4f11-8c94-9d85e9d65a22", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35", Pod:"csi-node-driver-cs2xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dd9bc586f4", MAC:"f6:b4:72:34:28:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:13.765568 containerd[1542]: 2025-08-13 07:21:13.750 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35" Namespace="calico-system" Pod="csi-node-driver-cs2xn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:13.765568 containerd[1542]: time="2025-08-13T07:21:13.764857037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nwh9w,Uid:f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc,Namespace:calico-system,Attempt:1,} returns sandbox id \"0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42\"" Aug 13 07:21:13.797248 containerd[1542]: time="2025-08-13T07:21:13.793906534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:13.797248 containerd[1542]: time="2025-08-13T07:21:13.793953632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:13.797248 containerd[1542]: time="2025-08-13T07:21:13.793964274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:13.797248 containerd[1542]: time="2025-08-13T07:21:13.794021377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:13.806967 systemd[1]: Started cri-containerd-aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35.scope - libcontainer container aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35. 
Aug 13 07:21:13.823192 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:13.852757 containerd[1542]: time="2025-08-13T07:21:13.852584355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cs2xn,Uid:58b74c31-6d05-4f11-8c94-9d85e9d65a22,Namespace:calico-system,Attempt:1,} returns sandbox id \"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35\"" Aug 13 07:21:14.415962 systemd-networkd[1349]: cali70258bc21f5: Gained IPv6LL Aug 13 07:21:14.481035 systemd-networkd[1349]: cali6f01ad1a1ee: Gained IPv6LL Aug 13 07:21:15.065396 containerd[1542]: time="2025-08-13T07:21:15.065364358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:15.067844 containerd[1542]: time="2025-08-13T07:21:15.066191209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:21:15.068801 containerd[1542]: time="2025-08-13T07:21:15.068773040Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:15.070271 containerd[1542]: time="2025-08-13T07:21:15.070223769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:15.071186 containerd[1542]: time="2025-08-13T07:21:15.070840079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.547860579s" Aug 13 07:21:15.071186 containerd[1542]: time="2025-08-13T07:21:15.070868819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:21:15.071775 containerd[1542]: time="2025-08-13T07:21:15.071751631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:21:15.096367 containerd[1542]: time="2025-08-13T07:21:15.096334435Z" level=info msg="CreateContainer within sandbox \"537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:21:15.106139 containerd[1542]: time="2025-08-13T07:21:15.106111453Z" level=info msg="CreateContainer within sandbox \"537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8debb08f85bb7a37f7ddda7f57fe01b5fcab1e2e42318c0f50f99cd51ffdb5a8\"" Aug 13 07:21:15.107568 containerd[1542]: time="2025-08-13T07:21:15.106669411Z" level=info msg="StartContainer for \"8debb08f85bb7a37f7ddda7f57fe01b5fcab1e2e42318c0f50f99cd51ffdb5a8\"" Aug 13 07:21:15.141904 systemd[1]: Started cri-containerd-8debb08f85bb7a37f7ddda7f57fe01b5fcab1e2e42318c0f50f99cd51ffdb5a8.scope - libcontainer container 8debb08f85bb7a37f7ddda7f57fe01b5fcab1e2e42318c0f50f99cd51ffdb5a8. 
Aug 13 07:21:15.173886 containerd[1542]: time="2025-08-13T07:21:15.173578148Z" level=info msg="StopPodSandbox for \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\"" Aug 13 07:21:15.183271 containerd[1542]: time="2025-08-13T07:21:15.183244983Z" level=info msg="StartContainer for \"8debb08f85bb7a37f7ddda7f57fe01b5fcab1e2e42318c0f50f99cd51ffdb5a8\" returns successfully" Aug 13 07:21:15.183960 systemd-networkd[1349]: cali9dd9bc586f4: Gained IPv6LL Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.308 [INFO][4940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.309 [INFO][4940] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" iface="eth0" netns="/var/run/netns/cni-2dec3adc-8b47-715c-6c3f-25e95ab8bd38" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.309 [INFO][4940] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" iface="eth0" netns="/var/run/netns/cni-2dec3adc-8b47-715c-6c3f-25e95ab8bd38" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.309 [INFO][4940] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" iface="eth0" netns="/var/run/netns/cni-2dec3adc-8b47-715c-6c3f-25e95ab8bd38" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.309 [INFO][4940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.309 [INFO][4940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.336 [INFO][4947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.336 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.336 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.340 [WARNING][4947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.340 [INFO][4947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.341 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:15.344498 containerd[1542]: 2025-08-13 07:21:15.343 [INFO][4940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:15.346080 containerd[1542]: time="2025-08-13T07:21:15.344660884Z" level=info msg="TearDown network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\" successfully" Aug 13 07:21:15.346080 containerd[1542]: time="2025-08-13T07:21:15.344679669Z" level=info msg="StopPodSandbox for \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\" returns successfully" Aug 13 07:21:15.346080 containerd[1542]: time="2025-08-13T07:21:15.345373297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-jsplw,Uid:bec5125a-0b32-4b9c-9b75-c6827db5f9b6,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:21:15.347880 systemd[1]: run-netns-cni\x2d2dec3adc\x2d8b47\x2d715c\x2d6c3f\x2d25e95ab8bd38.mount: Deactivated successfully. 
Aug 13 07:21:15.425741 systemd-networkd[1349]: cali793ec9fd11e: Link UP Aug 13 07:21:15.426268 systemd-networkd[1349]: cali793ec9fd11e: Gained carrier Aug 13 07:21:15.433198 kubelet[2712]: I0813 07:21:15.432995 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66794f6b97-5xnbz" podStartSLOduration=23.952796936 podStartE2EDuration="28.432979436s" podCreationTimestamp="2025-08-13 07:20:47 +0000 UTC" firstStartedPulling="2025-08-13 07:21:10.591350573 +0000 UTC m=+38.560782714" lastFinishedPulling="2025-08-13 07:21:15.071533069 +0000 UTC m=+43.040965214" observedRunningTime="2025-08-13 07:21:15.410097639 +0000 UTC m=+43.379529789" watchObservedRunningTime="2025-08-13 07:21:15.432979436 +0000 UTC m=+43.402411583" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.379 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0 calico-apiserver-649d79c576- calico-apiserver bec5125a-0b32-4b9c-9b75-c6827db5f9b6 998 0 2025-08-13 07:20:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649d79c576 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-649d79c576-jsplw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali793ec9fd11e [] [] }} ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.379 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" 
Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.398 [INFO][4965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" HandleID="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.398 [INFO][4965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" HandleID="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-649d79c576-jsplw", "timestamp":"2025-08-13 07:21:15.398248081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.398 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.398 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.398 [INFO][4965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.404 [INFO][4965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.407 [INFO][4965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.411 [INFO][4965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.412 [INFO][4965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.414 [INFO][4965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.414 [INFO][4965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.415 [INFO][4965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90 Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.418 [INFO][4965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.421 [INFO][4965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.422 [INFO][4965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" host="localhost" Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.422 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:15.440782 containerd[1542]: 2025-08-13 07:21:15.422 [INFO][4965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" HandleID="k8s-pod-network.dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.442136 containerd[1542]: 2025-08-13 07:21:15.424 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"bec5125a-0b32-4b9c-9b75-c6827db5f9b6", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-649d79c576-jsplw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali793ec9fd11e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:15.442136 containerd[1542]: 2025-08-13 07:21:15.424 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.442136 containerd[1542]: 2025-08-13 07:21:15.424 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali793ec9fd11e ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.442136 containerd[1542]: 2025-08-13 07:21:15.426 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.442136 containerd[1542]: 2025-08-13 07:21:15.426 [INFO][4953] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"bec5125a-0b32-4b9c-9b75-c6827db5f9b6", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90", Pod:"calico-apiserver-649d79c576-jsplw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali793ec9fd11e", MAC:"12:ae:b4:4c:9c:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:15.442136 containerd[1542]: 2025-08-13 07:21:15.434 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90" Namespace="calico-apiserver" Pod="calico-apiserver-649d79c576-jsplw" WorkloadEndpoint="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:15.464033 kubelet[2712]: I0813 07:21:15.463687 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:21:15.468632 containerd[1542]: time="2025-08-13T07:21:15.468515559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:21:15.468632 containerd[1542]: time="2025-08-13T07:21:15.468555693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:21:15.468632 containerd[1542]: time="2025-08-13T07:21:15.468583016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:15.469112 containerd[1542]: time="2025-08-13T07:21:15.469012278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:21:15.493946 systemd[1]: Started cri-containerd-dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90.scope - libcontainer container dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90. 
Aug 13 07:21:15.505634 systemd-networkd[1349]: calie6e01a11ed9: Gained IPv6LL Aug 13 07:21:15.506254 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:21:15.544368 containerd[1542]: time="2025-08-13T07:21:15.544343164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649d79c576-jsplw,Uid:bec5125a-0b32-4b9c-9b75-c6827db5f9b6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90\"" Aug 13 07:21:16.404337 kubelet[2712]: I0813 07:21:16.404314 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:21:16.975969 systemd-networkd[1349]: cali793ec9fd11e: Gained IPv6LL Aug 13 07:21:18.420851 containerd[1542]: time="2025-08-13T07:21:18.420470359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:18.440589 containerd[1542]: time="2025-08-13T07:21:18.440554769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:21:18.451276 containerd[1542]: time="2025-08-13T07:21:18.451249065Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:18.462750 containerd[1542]: time="2025-08-13T07:21:18.462704925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:18.463372 containerd[1542]: time="2025-08-13T07:21:18.463179779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.391404515s" Aug 13 07:21:18.463372 containerd[1542]: time="2025-08-13T07:21:18.463204782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:21:18.464351 containerd[1542]: time="2025-08-13T07:21:18.464329796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:21:18.480421 containerd[1542]: time="2025-08-13T07:21:18.480347331Z" level=info msg="CreateContainer within sandbox \"0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:21:18.574289 containerd[1542]: time="2025-08-13T07:21:18.574239909Z" level=info msg="CreateContainer within sandbox \"0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e294311ac92f336518b674370a85ca4ee1a421e3c145bae0070a795f27ca8db\"" Aug 13 07:21:18.574624 containerd[1542]: time="2025-08-13T07:21:18.574572498Z" level=info msg="StartContainer for \"7e294311ac92f336518b674370a85ca4ee1a421e3c145bae0070a795f27ca8db\"" Aug 13 07:21:18.597003 systemd[1]: Started cri-containerd-7e294311ac92f336518b674370a85ca4ee1a421e3c145bae0070a795f27ca8db.scope - libcontainer container 7e294311ac92f336518b674370a85ca4ee1a421e3c145bae0070a795f27ca8db. 
Aug 13 07:21:18.636165 containerd[1542]: time="2025-08-13T07:21:18.636128197Z" level=info msg="StartContainer for \"7e294311ac92f336518b674370a85ca4ee1a421e3c145bae0070a795f27ca8db\" returns successfully" Aug 13 07:21:20.463452 kubelet[2712]: I0813 07:21:20.463420 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:21:22.104936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520670782.mount: Deactivated successfully. Aug 13 07:21:22.272834 containerd[1542]: time="2025-08-13T07:21:22.223448221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:21:22.275575 containerd[1542]: time="2025-08-13T07:21:22.226601501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:22.275575 containerd[1542]: time="2025-08-13T07:21:22.242808420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.77845668s" Aug 13 07:21:22.275575 containerd[1542]: time="2025-08-13T07:21:22.275233470Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:22.278903 containerd[1542]: time="2025-08-13T07:21:22.278793548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:21:22.281635 containerd[1542]: time="2025-08-13T07:21:22.281540301Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:22.476883 containerd[1542]: time="2025-08-13T07:21:22.476109552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:21:22.615048 containerd[1542]: time="2025-08-13T07:21:22.615016161Z" level=info msg="CreateContainer within sandbox \"cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:21:22.645103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264200167.mount: Deactivated successfully. Aug 13 07:21:22.669338 containerd[1542]: time="2025-08-13T07:21:22.669315963Z" level=info msg="CreateContainer within sandbox \"cf1f6be4ec7357893ce644a574d84542d5538c13a5034cd28869a69daeb933d6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b1048d9b0a65fc04de0822ac6f551a6d306d02f222bb597173559775216dc3be\"" Aug 13 07:21:22.686115 containerd[1542]: time="2025-08-13T07:21:22.686027397Z" level=info msg="StartContainer for \"b1048d9b0a65fc04de0822ac6f551a6d306d02f222bb597173559775216dc3be\"" Aug 13 07:21:22.806944 systemd[1]: Started cri-containerd-b1048d9b0a65fc04de0822ac6f551a6d306d02f222bb597173559775216dc3be.scope - libcontainer container b1048d9b0a65fc04de0822ac6f551a6d306d02f222bb597173559775216dc3be. 
Aug 13 07:21:22.864115 containerd[1542]: time="2025-08-13T07:21:22.864076348Z" level=info msg="StartContainer for \"b1048d9b0a65fc04de0822ac6f551a6d306d02f222bb597173559775216dc3be\" returns successfully" Aug 13 07:21:22.962014 kubelet[2712]: E0813 07:21:22.957610 2712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod155374c2_a438_4233_8509_2a4773dc2fe5.slice/cri-containerd-b1048d9b0a65fc04de0822ac6f551a6d306d02f222bb597173559775216dc3be.scope\": RecentStats: unable to find data in memory cache]" Aug 13 07:21:23.703457 kubelet[2712]: I0813 07:21:23.692448 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-649d79c576-kmjq6" podStartSLOduration=31.850158952 podStartE2EDuration="38.689023859s" podCreationTimestamp="2025-08-13 07:20:45 +0000 UTC" firstStartedPulling="2025-08-13 07:21:11.62513707 +0000 UTC m=+39.594569211" lastFinishedPulling="2025-08-13 07:21:18.464001971 +0000 UTC m=+46.433434118" observedRunningTime="2025-08-13 07:21:19.48090647 +0000 UTC m=+47.450338612" watchObservedRunningTime="2025-08-13 07:21:23.689023859 +0000 UTC m=+51.658456002" Aug 13 07:21:23.703457 kubelet[2712]: I0813 07:21:23.703277 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-658f645c7b-kcmts" podStartSLOduration=2.8682994920000002 podStartE2EDuration="15.703265104s" podCreationTimestamp="2025-08-13 07:21:08 +0000 UTC" firstStartedPulling="2025-08-13 07:21:09.618323846 +0000 UTC m=+37.587755987" lastFinishedPulling="2025-08-13 07:21:22.453289457 +0000 UTC m=+50.422721599" observedRunningTime="2025-08-13 07:21:23.657210382 +0000 UTC m=+51.626642523" watchObservedRunningTime="2025-08-13 07:21:23.703265104 +0000 UTC m=+51.672697249" Aug 13 07:21:25.373868 kubelet[2712]: I0813 07:21:25.358517 2712 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Aug 13 07:21:25.409328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901360357.mount: Deactivated successfully. Aug 13 07:21:26.426518 kubelet[2712]: I0813 07:21:26.426192 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:21:27.225575 containerd[1542]: time="2025-08-13T07:21:27.225438407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:27.228606 containerd[1542]: time="2025-08-13T07:21:27.228546642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:21:27.259335 containerd[1542]: time="2025-08-13T07:21:27.259288175Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:27.263108 containerd[1542]: time="2025-08-13T07:21:27.262339220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:27.265284 containerd[1542]: time="2025-08-13T07:21:27.265258802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.785783318s" Aug 13 07:21:27.265371 containerd[1542]: time="2025-08-13T07:21:27.265361893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:21:27.308889 containerd[1542]: 
time="2025-08-13T07:21:27.308860026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:21:27.325080 containerd[1542]: time="2025-08-13T07:21:27.325050300Z" level=info msg="CreateContainer within sandbox \"0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:21:27.367077 containerd[1542]: time="2025-08-13T07:21:27.366981754Z" level=info msg="CreateContainer within sandbox \"0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3a30115b94f078ede5498f78e250441f62e2c86e91b65811b61dbdecf63128fd\"" Aug 13 07:21:27.375358 containerd[1542]: time="2025-08-13T07:21:27.375307925Z" level=info msg="StartContainer for \"3a30115b94f078ede5498f78e250441f62e2c86e91b65811b61dbdecf63128fd\"" Aug 13 07:21:27.494962 systemd[1]: Started cri-containerd-3a30115b94f078ede5498f78e250441f62e2c86e91b65811b61dbdecf63128fd.scope - libcontainer container 3a30115b94f078ede5498f78e250441f62e2c86e91b65811b61dbdecf63128fd. 
Aug 13 07:21:27.583947 containerd[1542]: time="2025-08-13T07:21:27.583923495Z" level=info msg="StartContainer for \"3a30115b94f078ede5498f78e250441f62e2c86e91b65811b61dbdecf63128fd\" returns successfully" Aug 13 07:21:27.737194 kubelet[2712]: I0813 07:21:27.736994 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-nwh9w" podStartSLOduration=27.205507228 podStartE2EDuration="40.727109642s" podCreationTimestamp="2025-08-13 07:20:47 +0000 UTC" firstStartedPulling="2025-08-13 07:21:13.766974848 +0000 UTC m=+41.736406989" lastFinishedPulling="2025-08-13 07:21:27.288577262 +0000 UTC m=+55.258009403" observedRunningTime="2025-08-13 07:21:27.719116192 +0000 UTC m=+55.688548344" watchObservedRunningTime="2025-08-13 07:21:27.727109642 +0000 UTC m=+55.696541787" Aug 13 07:21:29.731918 containerd[1542]: time="2025-08-13T07:21:29.731888432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:29.733662 containerd[1542]: time="2025-08-13T07:21:29.732505943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:21:29.733662 containerd[1542]: time="2025-08-13T07:21:29.732874502Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:29.734422 containerd[1542]: time="2025-08-13T07:21:29.734406194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:29.734835 containerd[1542]: time="2025-08-13T07:21:29.734801027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.425072018s" Aug 13 07:21:29.734835 containerd[1542]: time="2025-08-13T07:21:29.734829304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:21:29.735726 containerd[1542]: time="2025-08-13T07:21:29.735707265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:21:29.736793 containerd[1542]: time="2025-08-13T07:21:29.736771961Z" level=info msg="CreateContainer within sandbox \"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:21:29.805201 containerd[1542]: time="2025-08-13T07:21:29.805174800Z" level=info msg="CreateContainer within sandbox \"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"289eddd083237dd08a27b6889522dde35084f88b1ec0a74f72eb81d71d88d427\"" Aug 13 07:21:29.819265 containerd[1542]: time="2025-08-13T07:21:29.818500610Z" level=info msg="StartContainer for \"289eddd083237dd08a27b6889522dde35084f88b1ec0a74f72eb81d71d88d427\"" Aug 13 07:21:29.860946 systemd[1]: Started cri-containerd-289eddd083237dd08a27b6889522dde35084f88b1ec0a74f72eb81d71d88d427.scope - libcontainer container 289eddd083237dd08a27b6889522dde35084f88b1ec0a74f72eb81d71d88d427. 
Aug 13 07:21:29.969311 containerd[1542]: time="2025-08-13T07:21:29.969272265Z" level=info msg="StartContainer for \"289eddd083237dd08a27b6889522dde35084f88b1ec0a74f72eb81d71d88d427\" returns successfully" Aug 13 07:21:30.083907 containerd[1542]: time="2025-08-13T07:21:30.083739727Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:30.084147 containerd[1542]: time="2025-08-13T07:21:30.084119671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:21:30.087163 containerd[1542]: time="2025-08-13T07:21:30.087139606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 351.416897ms" Aug 13 07:21:30.087163 containerd[1542]: time="2025-08-13T07:21:30.087163246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:21:30.089140 containerd[1542]: time="2025-08-13T07:21:30.089012491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:21:30.091071 containerd[1542]: time="2025-08-13T07:21:30.090990426Z" level=info msg="CreateContainer within sandbox \"dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:21:30.114796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157845880.mount: Deactivated successfully. 
Aug 13 07:21:30.137933 containerd[1542]: time="2025-08-13T07:21:30.123289158Z" level=info msg="CreateContainer within sandbox \"dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7ba19e07c08ef845d566ed11cf51e34e1b7f13d9e0ab7b4518b9f721bd0a858a\"" Aug 13 07:21:30.137933 containerd[1542]: time="2025-08-13T07:21:30.123717287Z" level=info msg="StartContainer for \"7ba19e07c08ef845d566ed11cf51e34e1b7f13d9e0ab7b4518b9f721bd0a858a\"" Aug 13 07:21:30.172008 systemd[1]: Started cri-containerd-7ba19e07c08ef845d566ed11cf51e34e1b7f13d9e0ab7b4518b9f721bd0a858a.scope - libcontainer container 7ba19e07c08ef845d566ed11cf51e34e1b7f13d9e0ab7b4518b9f721bd0a858a. Aug 13 07:21:30.257263 containerd[1542]: time="2025-08-13T07:21:30.256946214Z" level=info msg="StartContainer for \"7ba19e07c08ef845d566ed11cf51e34e1b7f13d9e0ab7b4518b9f721bd0a858a\" returns successfully" Aug 13 07:21:30.960141 kubelet[2712]: I0813 07:21:30.959596 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-649d79c576-jsplw" podStartSLOduration=31.439591789 podStartE2EDuration="45.959580648s" podCreationTimestamp="2025-08-13 07:20:45 +0000 UTC" firstStartedPulling="2025-08-13 07:21:15.568853234 +0000 UTC m=+43.538285375" lastFinishedPulling="2025-08-13 07:21:30.088842091 +0000 UTC m=+58.058274234" observedRunningTime="2025-08-13 07:21:30.957858345 +0000 UTC m=+58.927290496" watchObservedRunningTime="2025-08-13 07:21:30.959580648 +0000 UTC m=+58.929012787" Aug 13 07:21:32.124172 kubelet[2712]: I0813 07:21:32.124145 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:21:32.175616 containerd[1542]: time="2025-08-13T07:21:32.175585638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:32.178418 containerd[1542]: 
time="2025-08-13T07:21:32.177813330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:21:32.178882 containerd[1542]: time="2025-08-13T07:21:32.178734698Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:32.180243 containerd[1542]: time="2025-08-13T07:21:32.180216859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:21:32.181596 containerd[1542]: time="2025-08-13T07:21:32.181408444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.092377522s" Aug 13 07:21:32.181596 containerd[1542]: time="2025-08-13T07:21:32.181439434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:21:32.706791 containerd[1542]: time="2025-08-13T07:21:32.706432502Z" level=info msg="CreateContainer within sandbox \"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:21:32.743021 containerd[1542]: time="2025-08-13T07:21:32.742970346Z" level=info msg="StopPodSandbox for \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\"" Aug 13 07:21:33.036638 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount837175485.mount: Deactivated successfully. Aug 13 07:21:33.057882 containerd[1542]: time="2025-08-13T07:21:33.041464265Z" level=info msg="CreateContainer within sandbox \"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e7867b1eae390b9d536a9daf53ea47121cdb2c9728db60dca1034c1e71ee5bc3\"" Aug 13 07:21:33.057882 containerd[1542]: time="2025-08-13T07:21:33.048591597Z" level=info msg="StartContainer for \"e7867b1eae390b9d536a9daf53ea47121cdb2c9728db60dca1034c1e71ee5bc3\"" Aug 13 07:21:33.373135 systemd[1]: Started cri-containerd-e7867b1eae390b9d536a9daf53ea47121cdb2c9728db60dca1034c1e71ee5bc3.scope - libcontainer container e7867b1eae390b9d536a9daf53ea47121cdb2c9728db60dca1034c1e71ee5bc3. Aug 13 07:21:33.498073 containerd[1542]: time="2025-08-13T07:21:33.497617445Z" level=info msg="StartContainer for \"e7867b1eae390b9d536a9daf53ea47121cdb2c9728db60dca1034c1e71ee5bc3\" returns successfully" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.706 [WARNING][5412] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0", GenerateName:"calico-kube-controllers-66794f6b97-", Namespace:"calico-system", SelfLink:"", UID:"1308b7c2-9bc6-44fd-8d86-34c607162a5d", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66794f6b97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed", Pod:"calico-kube-controllers-66794f6b97-5xnbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46b0f5135aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.710 [INFO][5412] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.711 [INFO][5412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" iface="eth0" netns="" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.711 [INFO][5412] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.711 [INFO][5412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.988 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.992 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:33.993 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:34.003 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:34.003 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:34.005 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:34.008801 containerd[1542]: 2025-08-13 07:21:34.006 [INFO][5412] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.008801 containerd[1542]: time="2025-08-13T07:21:34.008409322Z" level=info msg="TearDown network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\" successfully" Aug 13 07:21:34.008801 containerd[1542]: time="2025-08-13T07:21:34.008424893Z" level=info msg="StopPodSandbox for \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\" returns successfully" Aug 13 07:21:34.094432 containerd[1542]: time="2025-08-13T07:21:34.094350276Z" level=info msg="RemovePodSandbox for \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\"" Aug 13 07:21:34.096518 containerd[1542]: time="2025-08-13T07:21:34.096337681Z" level=info msg="Forcibly stopping sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\"" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.185 [WARNING][5467] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0", GenerateName:"calico-kube-controllers-66794f6b97-", Namespace:"calico-system", SelfLink:"", UID:"1308b7c2-9bc6-44fd-8d86-34c607162a5d", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66794f6b97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"537e4504d54a154f756f111b3dc2199c13107e814f5bfd44c598c832b42051ed", Pod:"calico-kube-controllers-66794f6b97-5xnbz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46b0f5135aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.185 [INFO][5467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.185 [INFO][5467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" iface="eth0" netns="" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.185 [INFO][5467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.185 [INFO][5467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.226 [INFO][5475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.226 [INFO][5475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.226 [INFO][5475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.231 [WARNING][5475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.231 [INFO][5475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" HandleID="k8s-pod-network.b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Workload="localhost-k8s-calico--kube--controllers--66794f6b97--5xnbz-eth0" Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.232 [INFO][5475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:34.238884 containerd[1542]: 2025-08-13 07:21:34.235 [INFO][5467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d" Aug 13 07:21:34.238884 containerd[1542]: time="2025-08-13T07:21:34.238712943Z" level=info msg="TearDown network for sandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\" successfully" Aug 13 07:21:34.329879 containerd[1542]: time="2025-08-13T07:21:34.329260336Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:21:34.369273 containerd[1542]: time="2025-08-13T07:21:34.369192180Z" level=info msg="RemovePodSandbox \"b56002a7fd1af2167ef2ba3b46fbadee2f2e66b7d9a0a3f4f9678e3757171a8d\" returns successfully" Aug 13 07:21:34.415991 containerd[1542]: time="2025-08-13T07:21:34.414836378Z" level=info msg="StopPodSandbox for \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\"" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.464 [WARNING][5489] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" WorkloadEndpoint="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.464 [INFO][5489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.464 [INFO][5489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" iface="eth0" netns="" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.464 [INFO][5489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.464 [INFO][5489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.511 [INFO][5497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.513 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.513 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.531 [WARNING][5497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.531 [INFO][5497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.532 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:34.539393 containerd[1542]: 2025-08-13 07:21:34.534 [INFO][5489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.545965 containerd[1542]: time="2025-08-13T07:21:34.539420014Z" level=info msg="TearDown network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\" successfully" Aug 13 07:21:34.545965 containerd[1542]: time="2025-08-13T07:21:34.539437214Z" level=info msg="StopPodSandbox for \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\" returns successfully" Aug 13 07:21:34.545965 containerd[1542]: time="2025-08-13T07:21:34.540055457Z" level=info msg="RemovePodSandbox for \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\"" Aug 13 07:21:34.545965 containerd[1542]: time="2025-08-13T07:21:34.540075096Z" level=info msg="Forcibly stopping sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\"" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.582 [WARNING][5511] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" WorkloadEndpoint="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.582 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.583 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" iface="eth0" netns="" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.583 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.583 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.612 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.613 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.613 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.619 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.619 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" HandleID="k8s-pod-network.d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Workload="localhost-k8s-whisker--6685cc6896--jpz9d-eth0" Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.621 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:34.626273 containerd[1542]: 2025-08-13 07:21:34.624 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355" Aug 13 07:21:34.628246 containerd[1542]: time="2025-08-13T07:21:34.626718932Z" level=info msg="TearDown network for sandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\" successfully" Aug 13 07:21:34.645276 containerd[1542]: time="2025-08-13T07:21:34.644545448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:21:34.645276 containerd[1542]: time="2025-08-13T07:21:34.644588747Z" level=info msg="RemovePodSandbox \"d80a2cca0e19569d9d06df36db2fb645b3ea8f77bc223e531ddc02ca3eacf355\" returns successfully" Aug 13 07:21:34.723969 containerd[1542]: time="2025-08-13T07:21:34.723766585Z" level=info msg="StopPodSandbox for \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\"" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.779 [WARNING][5532] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea5dce70-7db5-4a88-9ece-439f7b00c36d", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10", Pod:"coredns-7c65d6cfc9-m6rn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f01ad1a1ee", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.779 [INFO][5532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.779 [INFO][5532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" iface="eth0" netns="" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.779 [INFO][5532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.779 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.822 [INFO][5539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.822 [INFO][5539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.822 [INFO][5539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.841 [WARNING][5539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.841 [INFO][5539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.842 [INFO][5539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:34.847082 containerd[1542]: 2025-08-13 07:21:34.844 [INFO][5532] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.851079 containerd[1542]: time="2025-08-13T07:21:34.847062923Z" level=info msg="TearDown network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\" successfully" Aug 13 07:21:34.851079 containerd[1542]: time="2025-08-13T07:21:34.847320122Z" level=info msg="StopPodSandbox for \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\" returns successfully" Aug 13 07:21:34.876203 containerd[1542]: time="2025-08-13T07:21:34.875983996Z" level=info msg="RemovePodSandbox for \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\"" Aug 13 07:21:34.876203 containerd[1542]: time="2025-08-13T07:21:34.876010009Z" level=info msg="Forcibly stopping sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\"" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.918 [WARNING][5553] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea5dce70-7db5-4a88-9ece-439f7b00c36d", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e72f156ffd27c92e514f02479c8d6cfb6fbe607da13e1bf143fc544f11b79e10", Pod:"coredns-7c65d6cfc9-m6rn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f01ad1a1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.918 [INFO][5553] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.918 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" iface="eth0" netns="" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.918 [INFO][5553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.918 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.952 [INFO][5560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.952 [INFO][5560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.953 [INFO][5560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.958 [WARNING][5560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.958 [INFO][5560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" HandleID="k8s-pod-network.0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Workload="localhost-k8s-coredns--7c65d6cfc9--m6rn2-eth0" Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.960 [INFO][5560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:34.963791 containerd[1542]: 2025-08-13 07:21:34.962 [INFO][5553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855" Aug 13 07:21:34.979099 containerd[1542]: time="2025-08-13T07:21:34.963729795Z" level=info msg="TearDown network for sandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\" successfully" Aug 13 07:21:35.022976 containerd[1542]: time="2025-08-13T07:21:35.022867621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:21:35.022976 containerd[1542]: time="2025-08-13T07:21:35.022911495Z" level=info msg="RemovePodSandbox \"0f9dad026c300d3ed047b5b1a7f24d39a659f1b82c13f45a061268399e554855\" returns successfully" Aug 13 07:21:35.073697 containerd[1542]: time="2025-08-13T07:21:35.073665084Z" level=info msg="StopPodSandbox for \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\"" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.112 [WARNING][5599] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb2e06ca-79db-46b7-a389-34bc602b3a47", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970", Pod:"coredns-7c65d6cfc9-w2f57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70258bc21f5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.112 [INFO][5599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.112 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" iface="eth0" netns="" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.113 [INFO][5599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.113 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.141 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.141 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.141 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.167 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.167 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.168 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:35.171013 containerd[1542]: 2025-08-13 07:21:35.169 [INFO][5599] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.171366 containerd[1542]: time="2025-08-13T07:21:35.171050935Z" level=info msg="TearDown network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\" successfully" Aug 13 07:21:35.171366 containerd[1542]: time="2025-08-13T07:21:35.171066833Z" level=info msg="StopPodSandbox for \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\" returns successfully" Aug 13 07:21:35.268448 containerd[1542]: time="2025-08-13T07:21:35.268221164Z" level=info msg="RemovePodSandbox for \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\"" Aug 13 07:21:35.268448 containerd[1542]: time="2025-08-13T07:21:35.268251532Z" level=info msg="Forcibly stopping sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\"" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.310 [WARNING][5621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb2e06ca-79db-46b7-a389-34bc602b3a47", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b31ad8eaacc9e63e79b3e9ed4fa4e99dda3d1f241bbe6e5fd09f066711e18970", Pod:"coredns-7c65d6cfc9-w2f57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70258bc21f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.311 [INFO][5621] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.311 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" iface="eth0" netns="" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.311 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.311 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.325 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.325 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.325 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.329 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.329 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" HandleID="k8s-pod-network.80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Workload="localhost-k8s-coredns--7c65d6cfc9--w2f57-eth0" Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.330 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:35.334018 containerd[1542]: 2025-08-13 07:21:35.332 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db" Aug 13 07:21:35.344330 containerd[1542]: time="2025-08-13T07:21:35.334048138Z" level=info msg="TearDown network for sandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\" successfully" Aug 13 07:21:35.512945 kubelet[2712]: I0813 07:21:35.503064 2712 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:21:35.528410 kubelet[2712]: I0813 07:21:35.514542 2712 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:21:35.816340 kubelet[2712]: I0813 07:21:35.627399 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cs2xn" podStartSLOduration=29.907756313 podStartE2EDuration="48.591899399s" podCreationTimestamp="2025-08-13 07:20:47 +0000 UTC" firstStartedPulling="2025-08-13 07:21:13.85415782 +0000 UTC m=+41.823589962" 
lastFinishedPulling="2025-08-13 07:21:32.538300906 +0000 UTC m=+60.507733048" observedRunningTime="2025-08-13 07:21:35.563415378 +0000 UTC m=+63.532847529" watchObservedRunningTime="2025-08-13 07:21:35.591899399 +0000 UTC m=+63.561331545" Aug 13 07:21:36.236129 containerd[1542]: time="2025-08-13T07:21:36.235867480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:21:36.236129 containerd[1542]: time="2025-08-13T07:21:36.235913395Z" level=info msg="RemovePodSandbox \"80252c029773c1756727573daf3e40fdcaf08f401d4f9e91e4f9b51568b502db\" returns successfully" Aug 13 07:21:36.236759 containerd[1542]: time="2025-08-13T07:21:36.236601855Z" level=info msg="StopPodSandbox for \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\"" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.300 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b14a1626-6ffd-46ab-a2c5-1f786b38698c", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071", Pod:"calico-apiserver-649d79c576-kmjq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ec092aa222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.300 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.300 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" iface="eth0" netns="" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.300 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.300 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.332 [INFO][5652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.332 [INFO][5652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.332 [INFO][5652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.341 [WARNING][5652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.341 [INFO][5652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.342 [INFO][5652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:36.347569 containerd[1542]: 2025-08-13 07:21:36.345 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.349852 containerd[1542]: time="2025-08-13T07:21:36.347680876Z" level=info msg="TearDown network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\" successfully" Aug 13 07:21:36.349852 containerd[1542]: time="2025-08-13T07:21:36.347732596Z" level=info msg="StopPodSandbox for \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\" returns successfully" Aug 13 07:21:36.349852 containerd[1542]: time="2025-08-13T07:21:36.348106052Z" level=info msg="RemovePodSandbox for \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\"" Aug 13 07:21:36.349852 containerd[1542]: time="2025-08-13T07:21:36.348121534Z" level=info msg="Forcibly stopping sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\"" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.385 [WARNING][5667] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b14a1626-6ffd-46ab-a2c5-1f786b38698c", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0201bfe4a8f47b43cdef5bf893321402a0e3febb3060e66fa8d4efea2d063071", Pod:"calico-apiserver-649d79c576-kmjq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ec092aa222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.385 [INFO][5667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.385 [INFO][5667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" iface="eth0" netns="" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.385 [INFO][5667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.385 [INFO][5667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.411 [INFO][5677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.412 [INFO][5677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.412 [INFO][5677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.487 [WARNING][5677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.487 [INFO][5677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" HandleID="k8s-pod-network.470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Workload="localhost-k8s-calico--apiserver--649d79c576--kmjq6-eth0" Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.490 [INFO][5677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:36.493813 containerd[1542]: 2025-08-13 07:21:36.492 [INFO][5667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91" Aug 13 07:21:36.493813 containerd[1542]: time="2025-08-13T07:21:36.493798166Z" level=info msg="TearDown network for sandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\" successfully" Aug 13 07:21:36.497562 containerd[1542]: time="2025-08-13T07:21:36.497544091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:21:36.499943 containerd[1542]: time="2025-08-13T07:21:36.497639884Z" level=info msg="RemovePodSandbox \"470e456aa4766f2939b66d9c8f1e5f4c40a7a4f983d90d7a914eccfc33b15f91\" returns successfully" Aug 13 07:21:36.499943 containerd[1542]: time="2025-08-13T07:21:36.498754418Z" level=info msg="StopPodSandbox for \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\"" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.529 [WARNING][5691] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42", Pod:"goldmane-58fd7646b9-nwh9w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie6e01a11ed9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.529 [INFO][5691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.529 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" iface="eth0" netns="" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.529 [INFO][5691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.529 [INFO][5691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.552 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.552 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.552 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.557 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.557 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.559 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:36.565083 containerd[1542]: 2025-08-13 07:21:36.562 [INFO][5691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.567957 containerd[1542]: time="2025-08-13T07:21:36.565371850Z" level=info msg="TearDown network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\" successfully" Aug 13 07:21:36.567957 containerd[1542]: time="2025-08-13T07:21:36.565389040Z" level=info msg="StopPodSandbox for \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\" returns successfully" Aug 13 07:21:36.567957 containerd[1542]: time="2025-08-13T07:21:36.566523191Z" level=info msg="RemovePodSandbox for \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\"" Aug 13 07:21:36.567957 containerd[1542]: time="2025-08-13T07:21:36.566542521Z" level=info msg="Forcibly stopping sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\"" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.612 [WARNING][5712] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f1b8ec24-0eba-4c4b-be6f-8021be0e5ebc", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a0e1ee3f37d49a3eb0c1cfd7b8972a1e448d406be1c6106faec04f406b06e42", Pod:"goldmane-58fd7646b9-nwh9w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie6e01a11ed9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.612 [INFO][5712] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.612 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" iface="eth0" netns="" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.612 [INFO][5712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.612 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.642 [INFO][5720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.643 [INFO][5720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.643 [INFO][5720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.648 [WARNING][5720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.648 [INFO][5720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" HandleID="k8s-pod-network.a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Workload="localhost-k8s-goldmane--58fd7646b9--nwh9w-eth0" Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.649 [INFO][5720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:36.653464 containerd[1542]: 2025-08-13 07:21:36.652 [INFO][5712] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284" Aug 13 07:21:36.654847 containerd[1542]: time="2025-08-13T07:21:36.653478893Z" level=info msg="TearDown network for sandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\" successfully" Aug 13 07:21:36.666674 containerd[1542]: time="2025-08-13T07:21:36.666645902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:21:36.666747 containerd[1542]: time="2025-08-13T07:21:36.666712307Z" level=info msg="RemovePodSandbox \"a53cfa89dd190ae287382b4ac442a8995d24f84e3fd03a4e31a64f9ccc696284\" returns successfully" Aug 13 07:21:36.667136 containerd[1542]: time="2025-08-13T07:21:36.667095359Z" level=info msg="StopPodSandbox for \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\"" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.700 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cs2xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58b74c31-6d05-4f11-8c94-9d85e9d65a22", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35", Pod:"csi-node-driver-cs2xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dd9bc586f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.700 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.700 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" iface="eth0" netns="" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.700 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.700 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.724 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.725 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.725 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.737 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.737 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.738 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:36.743631 containerd[1542]: 2025-08-13 07:21:36.741 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.743631 containerd[1542]: time="2025-08-13T07:21:36.742664019Z" level=info msg="TearDown network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\" successfully" Aug 13 07:21:36.743631 containerd[1542]: time="2025-08-13T07:21:36.742684016Z" level=info msg="StopPodSandbox for \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\" returns successfully" Aug 13 07:21:36.743631 containerd[1542]: time="2025-08-13T07:21:36.743381320Z" level=info msg="RemovePodSandbox for \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\"" Aug 13 07:21:36.743631 containerd[1542]: time="2025-08-13T07:21:36.743396777Z" level=info msg="Forcibly stopping sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\"" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.782 [WARNING][5757] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cs2xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58b74c31-6d05-4f11-8c94-9d85e9d65a22", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aac1f3e35c90e6e79cfe1bc4bbb7a08564f369fea0f7c6a250b25ef81f01bd35", Pod:"csi-node-driver-cs2xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dd9bc586f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.782 [INFO][5757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.782 [INFO][5757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" iface="eth0" netns="" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.782 [INFO][5757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.782 [INFO][5757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.806 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.806 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.806 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.819 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.819 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" HandleID="k8s-pod-network.cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Workload="localhost-k8s-csi--node--driver--cs2xn-eth0" Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.821 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:36.824704 containerd[1542]: 2025-08-13 07:21:36.823 [INFO][5757] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3" Aug 13 07:21:36.827086 containerd[1542]: time="2025-08-13T07:21:36.824814326Z" level=info msg="TearDown network for sandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\" successfully" Aug 13 07:21:36.831241 containerd[1542]: time="2025-08-13T07:21:36.831214752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:21:36.831312 containerd[1542]: time="2025-08-13T07:21:36.831261681Z" level=info msg="RemovePodSandbox \"cc9ef92495e825b286cc52131fb0f765fef889ff1e3b2bf1d63fc295d18969b3\" returns successfully" Aug 13 07:21:36.831601 containerd[1542]: time="2025-08-13T07:21:36.831587131Z" level=info msg="StopPodSandbox for \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\"" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.901 [WARNING][5779] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"bec5125a-0b32-4b9c-9b75-c6827db5f9b6", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90", Pod:"calico-apiserver-649d79c576-jsplw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali793ec9fd11e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.901 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.901 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" iface="eth0" netns="" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.901 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.901 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.936 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.936 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.936 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.946 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.946 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0" Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.947 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:21:36.950838 containerd[1542]: 2025-08-13 07:21:36.948 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Aug 13 07:21:36.954971 containerd[1542]: time="2025-08-13T07:21:36.951073179Z" level=info msg="TearDown network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\" successfully" Aug 13 07:21:36.954971 containerd[1542]: time="2025-08-13T07:21:36.951105691Z" level=info msg="StopPodSandbox for \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\" returns successfully" Aug 13 07:21:36.954971 containerd[1542]: time="2025-08-13T07:21:36.951475647Z" level=info msg="RemovePodSandbox for \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\"" Aug 13 07:21:36.954971 containerd[1542]: time="2025-08-13T07:21:36.951490233Z" level=info msg="Forcibly stopping sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\"" Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:36.980 [WARNING][5800] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0", GenerateName:"calico-apiserver-649d79c576-", Namespace:"calico-apiserver", SelfLink:"", UID:"bec5125a-0b32-4b9c-9b75-c6827db5f9b6", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649d79c576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea9ab1aad54da803c3ee99b6ee1cecf0f5cea255e84a1b1a5f02406848f8d90", Pod:"calico-apiserver-649d79c576-jsplw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali793ec9fd11e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:36.980 [INFO][5800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc"
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:36.980 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" iface="eth0" netns=""
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:36.980 [INFO][5800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc"
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:36.980 [INFO][5800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc"
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:37.007 [INFO][5807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0"
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:37.007 [INFO][5807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:37.007 [INFO][5807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:37.023 [WARNING][5807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0"
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:37.024 [INFO][5807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" HandleID="k8s-pod-network.0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc" Workload="localhost-k8s-calico--apiserver--649d79c576--jsplw-eth0"
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:37.025 [INFO][5807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:21:37.030612 containerd[1542]: 2025-08-13 07:21:37.027 [INFO][5800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc"
Aug 13 07:21:37.030612 containerd[1542]: time="2025-08-13T07:21:37.030299530Z" level=info msg="TearDown network for sandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\" successfully"
Aug 13 07:21:37.048695 containerd[1542]: time="2025-08-13T07:21:37.047894156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 07:21:37.048695 containerd[1542]: time="2025-08-13T07:21:37.047938404Z" level=info msg="RemovePodSandbox \"0bee2fcc494e97c86ba04b59de2ba0f270a01b7b91ff4c47834f9d1075f20ddc\" returns successfully"
Aug 13 07:21:44.331295 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:39234.service - OpenSSH per-connection server daemon (139.178.68.195:39234).
Aug 13 07:21:44.494072 kubelet[2712]: I0813 07:21:44.484769 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:21:44.561516 sshd[5820]: Accepted publickey for core from 139.178.68.195 port 39234 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:21:44.568012 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:44.592577 systemd-logind[1513]: New session 10 of user core.
Aug 13 07:21:44.597651 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 07:21:45.400412 sshd[5820]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:45.409239 systemd-logind[1513]: Session 10 logged out. Waiting for processes to exit.
Aug 13 07:21:45.410774 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:39234.service: Deactivated successfully.
Aug 13 07:21:45.413224 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 07:21:45.414598 systemd-logind[1513]: Removed session 10.
Aug 13 07:21:50.481516 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:35384.service - OpenSSH per-connection server daemon (139.178.68.195:35384).
Aug 13 07:21:50.702132 sshd[5865]: Accepted publickey for core from 139.178.68.195 port 35384 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:21:50.704728 sshd[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:50.708327 systemd-logind[1513]: New session 11 of user core.
Aug 13 07:21:50.715911 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:21:51.300133 sshd[5865]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:51.305219 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:35384.service: Deactivated successfully.
Aug 13 07:21:51.306378 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:21:51.306864 systemd-logind[1513]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:21:51.307484 systemd-logind[1513]: Removed session 11.
Aug 13 07:21:51.993001 systemd[1]: run-containerd-runc-k8s.io-8debb08f85bb7a37f7ddda7f57fe01b5fcab1e2e42318c0f50f99cd51ffdb5a8-runc.lTe2ae.mount: Deactivated successfully.
Aug 13 07:21:56.352944 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:35390.service - OpenSSH per-connection server daemon (139.178.68.195:35390).
Aug 13 07:21:56.512085 sshd[5941]: Accepted publickey for core from 139.178.68.195 port 35390 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:21:56.517727 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:56.537565 systemd-logind[1513]: New session 12 of user core.
Aug 13 07:21:56.538958 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:21:57.784806 sshd[5941]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:57.792495 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:35390.service: Deactivated successfully.
Aug 13 07:21:57.797375 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:21:57.798301 systemd-logind[1513]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:21:57.804514 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:35406.service - OpenSSH per-connection server daemon (139.178.68.195:35406).
Aug 13 07:21:57.805194 systemd-logind[1513]: Removed session 12.
Aug 13 07:21:57.862563 sshd[5975]: Accepted publickey for core from 139.178.68.195 port 35406 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:21:57.863421 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:57.866454 systemd-logind[1513]: New session 13 of user core.
Aug 13 07:21:57.871143 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:21:58.324240 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:35410.service - OpenSSH per-connection server daemon (139.178.68.195:35410).
Aug 13 07:21:58.345328 sshd[5975]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:58.378285 systemd-logind[1513]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:21:58.379270 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:35406.service: Deactivated successfully.
Aug 13 07:21:58.380393 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:21:58.381113 systemd-logind[1513]: Removed session 13.
Aug 13 07:21:58.670412 sshd[5984]: Accepted publickey for core from 139.178.68.195 port 35410 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:21:58.671814 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:58.677015 systemd-logind[1513]: New session 14 of user core.
Aug 13 07:21:58.680912 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:21:59.565090 sshd[5984]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:59.585143 systemd-logind[1513]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:21:59.585258 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:35410.service: Deactivated successfully.
Aug 13 07:21:59.586285 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:21:59.586769 systemd-logind[1513]: Removed session 14.
Aug 13 07:22:04.696041 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:42522.service - OpenSSH per-connection server daemon (139.178.68.195:42522).
Aug 13 07:22:04.868731 sshd[6039]: Accepted publickey for core from 139.178.68.195 port 42522 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:22:04.873079 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:22:04.881088 systemd-logind[1513]: New session 15 of user core.
Aug 13 07:22:04.888929 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:22:05.712381 sshd[6039]: pam_unix(sshd:session): session closed for user core
Aug 13 07:22:05.719454 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:42522.service: Deactivated successfully.
Aug 13 07:22:05.720475 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:22:05.721253 systemd-logind[1513]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:22:05.725049 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:42532.service - OpenSSH per-connection server daemon (139.178.68.195:42532).
Aug 13 07:22:05.728474 systemd-logind[1513]: Removed session 15.
Aug 13 07:22:05.797958 sshd[6052]: Accepted publickey for core from 139.178.68.195 port 42532 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:22:05.798960 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:22:05.801715 systemd-logind[1513]: New session 16 of user core.
Aug 13 07:22:05.808908 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:22:06.250492 sshd[6052]: pam_unix(sshd:session): session closed for user core
Aug 13 07:22:06.255189 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:42532.service: Deactivated successfully.
Aug 13 07:22:06.256930 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:22:06.258123 systemd-logind[1513]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:22:06.262034 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.68.195:42538.service - OpenSSH per-connection server daemon (139.178.68.195:42538).
Aug 13 07:22:06.262719 systemd-logind[1513]: Removed session 16.
Aug 13 07:22:06.333608 sshd[6062]: Accepted publickey for core from 139.178.68.195 port 42538 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:22:06.334979 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:22:06.338375 systemd-logind[1513]: New session 17 of user core.
Aug 13 07:22:06.343915 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:22:13.654936 sshd[6062]: pam_unix(sshd:session): session closed for user core
Aug 13 07:22:13.718105 systemd[1]: sshd@14-139.178.70.105:22-139.178.68.195:42538.service: Deactivated successfully.
Aug 13 07:22:13.719339 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:22:13.720138 systemd-logind[1513]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:22:13.751354 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.68.195:50812.service - OpenSSH per-connection server daemon (139.178.68.195:50812).
Aug 13 07:22:13.755524 systemd-logind[1513]: Removed session 17.
Aug 13 07:22:13.968744 sshd[6106]: Accepted publickey for core from 139.178.68.195 port 50812 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:22:13.976565 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:22:13.996141 systemd-logind[1513]: New session 18 of user core.
Aug 13 07:22:14.000982 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:22:35.488553 kubelet[2712]: E0813 07:22:35.289357 2712 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.695s"
Aug 13 07:22:39.214666 sshd[6106]: pam_unix(sshd:session): session closed for user core
Aug 13 07:22:39.261218 systemd[1]: sshd@15-139.178.70.105:22-139.178.68.195:50812.service: Deactivated successfully.
Aug 13 07:22:39.263194 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:22:39.263291 systemd[1]: session-18.scope: Consumed 6.151s CPU time.
Aug 13 07:22:39.264136 systemd-logind[1513]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:22:39.274850 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.68.195:56906.service - OpenSSH per-connection server daemon (139.178.68.195:56906).
Aug 13 07:22:39.275370 systemd-logind[1513]: Removed session 18.
Aug 13 07:22:39.463839 sshd[6193]: Accepted publickey for core from 139.178.68.195 port 56906 ssh2: RSA SHA256:ngPUvt8IOvnSuX56Jd1rJjbUaAfs3mZgf3k+pwY7KWw
Aug 13 07:22:39.473940 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:22:39.490468 systemd-logind[1513]: New session 19 of user core.
Aug 13 07:22:39.495882 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:22:42.775538 kubelet[2712]: E0813 07:22:42.585844 2712 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.072s"
Aug 13 07:22:47.310368 kubelet[2712]: E0813 07:22:47.197141 2712 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.684s"
Aug 13 07:22:50.566887 kubelet[2712]: E0813 07:22:50.535052 2712 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.024s"
Aug 13 07:22:52.951484 sshd[6193]: pam_unix(sshd:session): session closed for user core
Aug 13 07:22:53.010041 systemd[1]: sshd@16-139.178.70.105:22-139.178.68.195:56906.service: Deactivated successfully.
Aug 13 07:22:53.016872 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:22:53.017011 systemd[1]: session-19.scope: Consumed 3.496s CPU time.
Aug 13 07:22:53.023946 systemd-logind[1513]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:22:53.050102 systemd-logind[1513]: Removed session 19.
Aug 13 07:22:53.789278 kubelet[2712]: E0813 07:22:53.766438 2712 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.438s"