May 16 00:19:31.750253 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:08:20 -00 2025
May 16 00:19:31.750270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 16 00:19:31.750277 kernel: Disabled fast string operations
May 16 00:19:31.750281 kernel: BIOS-provided physical RAM map:
May 16 00:19:31.750285 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
May 16 00:19:31.750289 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
May 16 00:19:31.750296 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
May 16 00:19:31.750300 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
May 16 00:19:31.750305 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
May 16 00:19:31.750309 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
May 16 00:19:31.750313 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
May 16 00:19:31.750317 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
May 16 00:19:31.750322 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
May 16 00:19:31.750326 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 16 00:19:31.750333 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
May 16 00:19:31.750337 kernel: NX (Execute Disable) protection: active
May 16 00:19:31.750342 kernel: APIC: Static calls initialized
May 16 00:19:31.750347 kernel: SMBIOS 2.7 present.
May 16 00:19:31.750352 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
May 16 00:19:31.750357 kernel: vmware: hypercall mode: 0x00
May 16 00:19:31.750362 kernel: Hypervisor detected: VMware
May 16 00:19:31.750366 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
May 16 00:19:31.750372 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
May 16 00:19:31.750377 kernel: vmware: using clock offset of 3597939696 ns
May 16 00:19:31.750382 kernel: tsc: Detected 3408.000 MHz processor
May 16 00:19:31.750387 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 00:19:31.750393 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 00:19:31.750398 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
May 16 00:19:31.750403 kernel: total RAM covered: 3072M
May 16 00:19:31.750407 kernel: Found optimal setting for mtrr clean up
May 16 00:19:31.750413 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
May 16 00:19:31.750418 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
May 16 00:19:31.750425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 00:19:31.750430 kernel: Using GB pages for direct mapping
May 16 00:19:31.750434 kernel: ACPI: Early table checksum verification disabled
May 16 00:19:31.750439 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
May 16 00:19:31.750444 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
May 16 00:19:31.750449 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
May 16 00:19:31.750454 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
May 16 00:19:31.750460 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 16 00:19:31.750467 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 16 00:19:31.750473 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
May 16 00:19:31.750478 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 16 00:19:31.750483 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 16 00:19:31.750488 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 16 00:19:31.750493 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 16 00:19:31.750500 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 16 00:19:31.750505 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 16 00:19:31.750510 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 16 00:19:31.750515 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 16 00:19:31.750520 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 16 00:19:31.750525 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 16 00:19:31.750531 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 16 00:19:31.750536 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 16 00:19:31.750541 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 16 00:19:31.750547 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 16 00:19:31.750552 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 16 00:19:31.750557 kernel: system APIC only can use physical flat
May 16 00:19:31.750562 kernel: APIC: Switched APIC routing to: physical flat
May 16 00:19:31.750568 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 16 00:19:31.750573 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 16 00:19:31.750578 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 16 00:19:31.750583 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 16 00:19:31.750588 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 16 00:19:31.750593 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 16 00:19:31.750599 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 16 00:19:31.750604 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 16 00:19:31.750609 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 16 00:19:31.750614 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 16 00:19:31.750619 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 16 00:19:31.750624 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 16 00:19:31.750629 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 16 00:19:31.750634 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 16 00:19:31.750639 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 16 00:19:31.750644 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 16 00:19:31.750650 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 16 00:19:31.750655 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 16 00:19:31.750660 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 16 00:19:31.750665 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 16 00:19:31.750670 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 16 00:19:31.750675 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 16 00:19:31.750680 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 16 00:19:31.750685 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 16 00:19:31.750690 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 16 00:19:31.750695 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 16 00:19:31.750700 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 16 00:19:31.750706 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 16 00:19:31.750711 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 16 00:19:31.750717 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 16 00:19:31.750722 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 16 00:19:31.750727 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 16 00:19:31.750732 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 16 00:19:31.750737 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 16 00:19:31.750742 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 16 00:19:31.750747 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 16 00:19:31.750752 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 16 00:19:31.750758 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 16 00:19:31.750763 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 16 00:19:31.750768 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 16 00:19:31.750773 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 16 00:19:31.750778 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 16 00:19:31.750783 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 16 00:19:31.750788 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 16 00:19:31.750793 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 16 00:19:31.750798 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 16 00:19:31.750803 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 16 00:19:31.750809 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 16 00:19:31.750814 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 16 00:19:31.750819 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 16 00:19:31.750824 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 16 00:19:31.750829 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 16 00:19:31.750834 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 16 00:19:31.750839 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 16 00:19:31.750844 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 16 00:19:31.750849 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 16 00:19:31.750854 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 16 00:19:31.750860 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 16 00:19:31.750866 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 16 00:19:31.750874 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 16 00:19:31.750881 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 16 00:19:31.750886 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 16 00:19:31.750891 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 16 00:19:31.750897 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 16 00:19:31.750902 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 16 00:19:31.750908 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 16 00:19:31.750914 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 16 00:19:31.750919 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 16 00:19:31.750925 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 16 00:19:31.750930 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 16 00:19:31.750935 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 16 00:19:31.750941 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 16 00:19:31.750946 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 16 00:19:31.750951 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 16 00:19:31.750977 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 16 00:19:31.750986 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 16 00:19:31.750994 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 16 00:19:31.751000 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 16 00:19:31.751005 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 16 00:19:31.751011 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 16 00:19:31.751016 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 16 00:19:31.751021 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 16 00:19:31.751026 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 16 00:19:31.751032 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 16 00:19:31.751037 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 16 00:19:31.751042 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 16 00:19:31.751049 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 16 00:19:31.751055 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 16 00:19:31.751060 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 16 00:19:31.751065 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 16 00:19:31.751071 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 16 00:19:31.751076 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 16 00:19:31.751081 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 16 00:19:31.751086 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 16 00:19:31.751092 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 16 00:19:31.751097 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 16 00:19:31.751104 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 16 00:19:31.751109 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 16 00:19:31.751114 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 16 00:19:31.751120 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 16 00:19:31.751125 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 16 00:19:31.751130 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 16 00:19:31.751136 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 16 00:19:31.751141 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 16 00:19:31.751146 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 16 00:19:31.751151 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 16 00:19:31.751157 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 16 00:19:31.751164 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 16 00:19:31.751169 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 16 00:19:31.751174 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 16 00:19:31.751180 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 16 00:19:31.751185 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 16 00:19:31.751190 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 16 00:19:31.751196 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 16 00:19:31.751201 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 16 00:19:31.751206 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 16 00:19:31.751212 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 16 00:19:31.751218 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 16 00:19:31.751223 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 16 00:19:31.751229 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 16 00:19:31.751234 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 16 00:19:31.751239 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 16 00:19:31.751245 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 16 00:19:31.751250 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 16 00:19:31.751256 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 16 00:19:31.751261 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 16 00:19:31.751266 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 16 00:19:31.751273 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 16 00:19:31.751278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 16 00:19:31.751284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 16 00:19:31.751290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 16 00:19:31.751296 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 16 00:19:31.751301 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 16 00:19:31.751307 kernel: Zone ranges:
May 16 00:19:31.751316 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 00:19:31.751322 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 16 00:19:31.751329 kernel: Normal empty
May 16 00:19:31.751335 kernel: Movable zone start for each node
May 16 00:19:31.751340 kernel: Early memory node ranges
May 16 00:19:31.751345 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 16 00:19:31.751351 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 16 00:19:31.751356 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 16 00:19:31.751362 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 16 00:19:31.751367 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:19:31.751373 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 16 00:19:31.751378 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 16 00:19:31.751385 kernel: ACPI: PM-Timer IO Port: 0x1008
May 16 00:19:31.751391 kernel: system APIC only can use physical flat
May 16 00:19:31.751396 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 16 00:19:31.751401 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 16 00:19:31.751407 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 16 00:19:31.751412 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 16 00:19:31.751418 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 16 00:19:31.751423 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 16 00:19:31.751428 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 16 00:19:31.751434 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 16 00:19:31.751441 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 16 00:19:31.751447 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 16 00:19:31.751452 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 16 00:19:31.751457 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 16 00:19:31.751463 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 16 00:19:31.751468 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 16 00:19:31.751473 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 16 00:19:31.751479 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 16 00:19:31.751484 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 16 00:19:31.751490 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
May 16 00:19:31.751496 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
May 16 00:19:31.751502 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
May 16 00:19:31.751507 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
May 16 00:19:31.751512 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
May 16 00:19:31.751518 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
May 16 00:19:31.751523 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
May 16 00:19:31.751528 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
May 16 00:19:31.751534 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
May 16 00:19:31.751539 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May 16 00:19:31.751547 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May 16 00:19:31.751552 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May 16 00:19:31.751557 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May 16 00:19:31.751563 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May 16 00:19:31.751568 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May 16 00:19:31.751574 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May 16 00:19:31.751579 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May 16 00:19:31.751584 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
May 16 00:19:31.751590 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
May 16 00:19:31.751595 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
May 16 00:19:31.751602 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
May 16 00:19:31.751607 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
May 16 00:19:31.751613 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
May 16 00:19:31.751619 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
May 16 00:19:31.751628 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
May 16 00:19:31.751636 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
May 16 00:19:31.751645 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
May 16 00:19:31.751650 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
May 16 00:19:31.751656 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
May 16 00:19:31.751661 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
May 16 00:19:31.751668 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
May 16 00:19:31.751674 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
May 16 00:19:31.751679 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
May 16 00:19:31.751685 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
May 16 00:19:31.751690 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
May 16 00:19:31.751696 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
May 16 00:19:31.751701 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
May 16 00:19:31.751707 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
May 16 00:19:31.751712 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
May 16 00:19:31.751720 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
May 16 00:19:31.751729 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
May 16 00:19:31.751734 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
May 16 00:19:31.751739 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
May 16 00:19:31.751745 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
May 16 00:19:31.751750 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
May 16 00:19:31.751756 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
May 16 00:19:31.751761 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
May 16 00:19:31.751766 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
May 16 00:19:31.751772 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
May 16 00:19:31.751778 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
May 16 00:19:31.751784 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
May 16 00:19:31.751789 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
May 16 00:19:31.751794 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
May 16 00:19:31.751800 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
May 16 00:19:31.751806 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
May 16 00:19:31.751814 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
May 16 00:19:31.751819 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
May 16 00:19:31.751825 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
May 16 00:19:31.751830 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
May 16 00:19:31.751837 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
May 16 00:19:31.751843 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
May 16 00:19:31.751849 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
May 16 00:19:31.751854 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
May 16 00:19:31.751859 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
May 16 00:19:31.751865 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
May 16 00:19:31.751870 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
May 16 00:19:31.751875 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
May 16 00:19:31.751881 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
May 16 00:19:31.751901 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
May 16 00:19:31.751909 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
May 16 00:19:31.751918 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
May 16 00:19:31.751926 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
May 16 00:19:31.751934 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
May 16 00:19:31.751940 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
May 16 00:19:31.751945 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
May 16 00:19:31.751950 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
May 16 00:19:31.751956 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
May 16 00:19:31.752005 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
May 16 00:19:31.752011 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
May 16 00:19:31.752019 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
May 16 00:19:31.752024 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
May 16 00:19:31.752030 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
May 16 00:19:31.752035 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
May 16 00:19:31.752041 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
May 16 00:19:31.752046 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
May 16 00:19:31.752051 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
May 16 00:19:31.752057 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
May 16 00:19:31.752062 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
May 16 00:19:31.752069 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
May 16 00:19:31.752074 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
May 16 00:19:31.752080 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
May 16 00:19:31.752089 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
May 16 00:19:31.752094 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
May 16 00:19:31.752100 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
May 16 00:19:31.752106 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
May 16 00:19:31.752111 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
May 16 00:19:31.752116 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
May 16 00:19:31.752122 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
May 16 00:19:31.752129 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
May 16 00:19:31.752134 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
May 16 00:19:31.752140 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
May 16 00:19:31.752145 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
May 16 00:19:31.752151 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
May 16 00:19:31.752156 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
May 16 00:19:31.752162 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
May 16 00:19:31.752167 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
May 16 00:19:31.752173 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
May 16 00:19:31.752178 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
May 16 00:19:31.752185 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
May 16 00:19:31.752192 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
May 16 00:19:31.752199 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
May 16 00:19:31.752205 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
May 16 00:19:31.752210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
May 16 00:19:31.752216 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 00:19:31.752221 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
May 16 00:19:31.752227 kernel: TSC deadline timer available
May 16 00:19:31.752232 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
May 16 00:19:31.752239 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
May 16 00:19:31.752245 kernel: Booting paravirtualized kernel on VMware hypervisor
May 16 00:19:31.752251 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 00:19:31.752257 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
May 16 00:19:31.752262 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
May 16 00:19:31.752268 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
May 16 00:19:31.752273 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
May 16 00:19:31.752279 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
May 16 00:19:31.752284 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
May 16 00:19:31.752291 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
May 16 00:19:31.752296 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
May 16 00:19:31.752311 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
May 16 00:19:31.752320 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
May 16 00:19:31.752325 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
May 16 00:19:31.752331 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
May 16 00:19:31.752337 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
May 16 00:19:31.752343 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
May 16 00:19:31.752349 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
May 16 00:19:31.752355 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
May 16 00:19:31.752361 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
May 16 00:19:31.752366 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
May 16 00:19:31.752372 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
May 16 00:19:31.752379 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 16 00:19:31.752385 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:19:31.752391 kernel: random: crng init done
May 16 00:19:31.752398 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 16 00:19:31.752404 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
May 16 00:19:31.752409 kernel: printk: log_buf_len min size: 262144 bytes
May 16 00:19:31.752415 kernel: printk: log_buf_len: 1048576 bytes
May 16 00:19:31.752421 kernel: printk: early log buf free: 239648(91%)
May 16 00:19:31.752427 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:19:31.752433 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 16 00:19:31.752438 kernel: Fallback order for Node 0: 0
May 16 00:19:31.752444 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
May 16 00:19:31.752450 kernel: Policy zone: DMA32
May 16 00:19:31.752457 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:19:31.752464 kernel: Memory: 1932236K/2096628K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43600K init, 1472K bss, 164132K reserved, 0K cma-reserved)
May 16 00:19:31.752470 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
May 16 00:19:31.752476 kernel: ftrace: allocating 37997 entries in 149 pages
May 16 00:19:31.752482 kernel: ftrace: allocated 149 pages with 4 groups
May 16 00:19:31.752488 kernel: Dynamic Preempt: voluntary
May 16 00:19:31.752494 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:19:31.752500 kernel: rcu: RCU event tracing is enabled.
May 16 00:19:31.752506 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
May 16 00:19:31.752513 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:19:31.752519 kernel: Rude variant of Tasks RCU enabled.
May 16 00:19:31.752525 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:19:31.752531 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:19:31.752536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
May 16 00:19:31.752542 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
May 16 00:19:31.752550 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
May 16 00:19:31.752555 kernel: Console: colour VGA+ 80x25
May 16 00:19:31.752561 kernel: printk: console [tty0] enabled
May 16 00:19:31.752567 kernel: printk: console [ttyS0] enabled
May 16 00:19:31.752573 kernel: ACPI: Core revision 20230628
May 16 00:19:31.752579 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 16 00:19:31.752585 kernel: APIC: Switch to symmetric I/O mode setup
May 16 00:19:31.752591 kernel: x2apic enabled
May 16 00:19:31.752597 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 00:19:31.752604 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 00:19:31.752610 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 16 00:19:31.752619 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 16 00:19:31.752631 kernel: Disabled fast string operations
May 16 00:19:31.752637 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 16 00:19:31.752646 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 16 00:19:31.752658 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 00:19:31.752664 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 16 00:19:31.752670 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 16 00:19:31.752678 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 16 00:19:31.752684 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 16 00:19:31.752690 kernel: RETBleed: Mitigation: Enhanced IBRS
May 16 00:19:31.752698 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 00:19:31.752704 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 00:19:31.752710 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 16 00:19:31.752716 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 16 00:19:31.752722 kernel: GDS: Unknown: Dependent on hypervisor status
May 16 00:19:31.752728 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 00:19:31.752735 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 00:19:31.752741 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 00:19:31.752747 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 00:19:31.752753 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 00:19:31.752758 kernel: Freeing SMP alternatives memory: 32K
May 16 00:19:31.752764 kernel: pid_max: default: 131072 minimum: 1024
May 16 00:19:31.752770 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 00:19:31.752776 kernel: landlock: Up and running.
May 16 00:19:31.752782 kernel: SELinux: Initializing.
May 16 00:19:31.752789 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 16 00:19:31.752795 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 16 00:19:31.752801 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 16 00:19:31.752807 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 16 00:19:31.752813 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 16 00:19:31.752819 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 16 00:19:31.752825 kernel: Performance Events: Skylake events, core PMU driver.
May 16 00:19:31.752830 kernel: core: CPUID marked event: 'cpu cycles' unavailable
May 16 00:19:31.752838 kernel: core: CPUID marked event: 'instructions' unavailable
May 16 00:19:31.752843 kernel: core: CPUID marked event: 'bus cycles' unavailable
May 16 00:19:31.752849 kernel: core: CPUID marked event: 'cache references' unavailable
May 16 00:19:31.752855 kernel: core: CPUID marked event: 'cache misses' unavailable
May 16 00:19:31.752861 kernel: core: CPUID marked event: 'branch instructions' unavailable
May 16 00:19:31.752867 kernel: core: CPUID marked event: 'branch misses' unavailable
May 16 00:19:31.752872 kernel: ... version: 1
May 16 00:19:31.752878 kernel: ... bit width: 48
May 16 00:19:31.752884 kernel: ... generic registers: 4
May 16 00:19:31.752893 kernel: ... value mask: 0000ffffffffffff
May 16 00:19:31.752900 kernel: ... max period: 000000007fffffff
May 16 00:19:31.752906 kernel: ... fixed-purpose events: 0
May 16 00:19:31.752912 kernel: ... event mask: 000000000000000f
May 16 00:19:31.752917 kernel: signal: max sigframe size: 1776
May 16 00:19:31.752923 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:19:31.752929 kernel: rcu: Max phase no-delay instances is 400.
May 16 00:19:31.752935 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 16 00:19:31.752942 kernel: smp: Bringing up secondary CPUs ...
May 16 00:19:31.752949 kernel: smpboot: x86: Booting SMP configuration:
May 16 00:19:31.752955 kernel: .... node #0, CPUs: #1
May 16 00:19:31.752972 kernel: Disabled fast string operations
May 16 00:19:31.752980 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
May 16 00:19:31.752986 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
May 16 00:19:31.752992 kernel: smp: Brought up 1 node, 2 CPUs
May 16 00:19:31.752998 kernel: smpboot: Max logical packages: 128
May 16 00:19:31.753004 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
May 16 00:19:31.753009 kernel: devtmpfs: initialized
May 16 00:19:31.753015 kernel: x86/mm: Memory block size: 128MB
May 16 00:19:31.753024 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
May 16 00:19:31.753029 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:19:31.753035 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
May 16 00:19:31.753041 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:19:31.753047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:19:31.753053 kernel: audit: initializing netlink subsys (disabled)
May 16 00:19:31.753062 kernel: audit: type=2000 audit(1747354770.066:1): state=initialized audit_enabled=0 res=1
May 16 00:19:31.753068 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:19:31.753074 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 00:19:31.753081 kernel: cpuidle: using governor menu
May 16 00:19:31.753087 kernel: Simple Boot Flag at 0x36 set to 0x80
May 16 00:19:31.753093 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:19:31.753099 kernel: dca service started, version 1.12.1
May 16 00:19:31.753105 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
May 16 00:19:31.753111 kernel: PCI: Using configuration type 1 for base access
May 16 00:19:31.753117 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 00:19:31.753123 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:19:31.753129 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 00:19:31.753136 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:19:31.753143 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 00:19:31.753151 kernel: ACPI: Added _OSI(Module Device)
May 16 00:19:31.753157 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:19:31.753162 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:19:31.753168 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:19:31.753174 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:19:31.753180 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
May 16 00:19:31.753186 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 00:19:31.753193 kernel: ACPI: Interpreter enabled
May 16 00:19:31.753199 kernel: ACPI: PM: (supports S0 S1 S5)
May 16 00:19:31.753209 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 00:19:31.753219 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 00:19:31.753225 kernel: PCI: Using E820 reservations for host bridge windows
May 16 00:19:31.753234 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
May 16 00:19:31.753243 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
May 16 00:19:31.753344 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:19:31.753404 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
May 16 00:19:31.753462 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
May 16 00:19:31.753470 kernel: PCI host bridge to bus 0000:00
May 16 00:19:31.753528 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 00:19:31.753575 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
May 16 00:19:31.753627 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 16 00:19:31.753680 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 00:19:31.753725 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
May 16 00:19:31.753773 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
May 16 00:19:31.753834 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
May 16 00:19:31.753894 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
May 16 00:19:31.753956 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
May 16 00:19:31.754037 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
May 16 00:19:31.754094 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
May 16 00:19:31.754147 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 16 00:19:31.754198 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 16 00:19:31.754248 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 16 00:19:31.754299 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 16 00:19:31.754355 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
May 16 00:19:31.754410 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
May 16 00:19:31.754461 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
May 16 00:19:31.754520 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
May 16 00:19:31.754572 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
May 16 00:19:31.754623 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
May 16 00:19:31.754678 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
May 16 00:19:31.754729 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
May 16 00:19:31.754784 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
May 16 00:19:31.754834 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
May 16 00:19:31.754884 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
May 16 00:19:31.754935 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 00:19:31.755001 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
May 16 00:19:31.755057 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755113 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755169 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755222 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755280 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755331 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755388 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755440 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755499 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755551 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755607 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755659 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755714 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755765 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755824 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.755876 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
May 16 00:19:31.755932 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.757612 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
May 16 00:19:31.757683 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.757744 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
May 16 00:19:31.757804 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.757859 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
May 16 00:19:31.757915 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758021 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758080 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758137 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758193 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758245 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758301 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758354 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758409 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758461 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758520 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758573 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758632 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758685 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758742 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758795 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
May 16 00:19:31.758854 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.758906 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759002 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759058 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759115 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759168 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759227 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759280 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759340 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759392 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759447 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759499 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759558 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759610 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759666 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759719 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759787 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759841 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
May 16 00:19:31.759897 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.759953 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
May 16 00:19:31.760078 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.760131 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
May 16 00:19:31.760186 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.760238 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
May 16 00:19:31.760303 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
May 16 00:19:31.760362 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
May 16 00:19:31.760416 kernel: pci_bus 0000:01: extended config space not accessible
May 16 00:19:31.760469 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 16 00:19:31.760521 kernel: pci_bus 0000:02: extended config space not accessible
May 16 00:19:31.760531 kernel: acpiphp: Slot [32] registered
May 16 00:19:31.760537 kernel: acpiphp: Slot [33] registered
May 16 00:19:31.760543 kernel: acpiphp: Slot [34] registered
May 16 00:19:31.760551 kernel: acpiphp: Slot [35] registered
May 16 00:19:31.760557 kernel: acpiphp: Slot [36] registered
May 16 00:19:31.760562 kernel: acpiphp: Slot [37] registered
May 16 00:19:31.760568 kernel: acpiphp: Slot [38] registered
May 16 00:19:31.760574 kernel: acpiphp: Slot [39] registered
May 16 00:19:31.760580 kernel: acpiphp: Slot [40] registered
May 16 00:19:31.760586 kernel: acpiphp: Slot [41] registered
May 16 00:19:31.760592 kernel: acpiphp: Slot [42] registered
May 16 00:19:31.760597 kernel: acpiphp: Slot [43] registered
May 16 00:19:31.760604 kernel: acpiphp: Slot [44] registered
May 16 00:19:31.760610 kernel: acpiphp: Slot [45] registered
May 16 00:19:31.760616 kernel: acpiphp: Slot [46] registered
May 16 00:19:31.760622 kernel: acpiphp: Slot [47] registered
May 16 00:19:31.760628 kernel: acpiphp: Slot [48] registered
May 16 00:19:31.760634 kernel: acpiphp: Slot [49] registered
May 16 00:19:31.760640 kernel: acpiphp: Slot [50] registered
May 16 00:19:31.760646 kernel: acpiphp: Slot [51] registered
May 16 00:19:31.760651 kernel: acpiphp: Slot [52] registered
May 16 00:19:31.760657 kernel: acpiphp: Slot [53] registered
May 16 00:19:31.760664 kernel: acpiphp: Slot [54] registered
May 16 00:19:31.760670 kernel: acpiphp: Slot [55] registered
May 16 00:19:31.760676 kernel: acpiphp: Slot [56] registered
May 16 00:19:31.760682 kernel: acpiphp: Slot [57] registered
May 16 00:19:31.760688 kernel: acpiphp: Slot [58] registered
May 16 00:19:31.760694 kernel: acpiphp: Slot [59] registered
May 16 00:19:31.760699 kernel: acpiphp: Slot [60] registered
May 16 00:19:31.760705 kernel: acpiphp: Slot [61] registered
May 16 00:19:31.760711 kernel: acpiphp: Slot [62] registered
May 16 00:19:31.760718 kernel: acpiphp: Slot [63] registered
May 16 00:19:31.760770 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
May 16 00:19:31.760822 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
May 16 00:19:31.760874 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
May 16 00:19:31.760925 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 16 00:19:31.760984 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
May 16 00:19:31.761036 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
May 16 00:19:31.761086 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
May 16 00:19:31.761140 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
May 16 00:19:31.761190 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
May 16 00:19:31.761248 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
May 16 00:19:31.761302 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
May 16 00:19:31.761354 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
May 16 00:19:31.761406 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 16 00:19:31.761458 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 16 00:19:31.761513 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 16 00:19:31.761566 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
May 16 00:19:31.761617 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
May 16 00:19:31.761668 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
May 16 00:19:31.761720 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
May 16 00:19:31.761772 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
May 16 00:19:31.761823 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
May 16 00:19:31.761873 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
May 16 00:19:31.761929 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
May 16 00:19:31.761996 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
May 16 00:19:31.762050 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
May 16 00:19:31.762102 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
May 16 00:19:31.762154 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
May 16 00:19:31.762206 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
May 16 00:19:31.762257 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
May 16 00:19:31.762318 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
May 16 00:19:31.762370 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
May 16 00:19:31.762422 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 16 00:19:31.762476 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
May 16 00:19:31.762527 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
May 16 00:19:31.762581 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
May 16 00:19:31.762635 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
May 16 00:19:31.762687 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
May 16 00:19:31.762738 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
May 16 00:19:31.762791 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
May 16 00:19:31.762843 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
May 16 00:19:31.762895 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
May 16 00:19:31.762955 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
May 16 00:19:31.763028 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
May 16 00:19:31.763082 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
May 16 00:19:31.763136 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
May 16 00:19:31.763190 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
May 16 00:19:31.763243 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 16 00:19:31.763296 kernel: pci 0000:0b:00.0: supports D1 D2
May 16 00:19:31.763371 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 16 00:19:31.763426 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 16 00:19:31.763478 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
May 16 00:19:31.763531 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
May 16 00:19:31.763581 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
May 16 00:19:31.763632 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
May 16 00:19:31.763683 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
May 16 00:19:31.763733 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
May 16 00:19:31.763801 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
May 16 00:19:31.763871 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
May 16 00:19:31.763922 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
May 16 00:19:31.764008 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
May 16 00:19:31.764062 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
May 16 00:19:31.764112 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
May 16 00:19:31.764162 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
May 16 00:19:31.764212 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 16 00:19:31.764265 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
May 16 00:19:31.764314 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
May 16 00:19:31.764365 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 16 00:19:31.764415 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
May 16 00:19:31.764465 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
May 16 00:19:31.764514 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
May 16 00:19:31.764565 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
May 16 00:19:31.764615 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
May 16 00:19:31.764667 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
May 16 00:19:31.764717 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
May 16 00:19:31.764767 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
May 16 00:19:31.764817 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 16 00:19:31.764867 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
May 16 00:19:31.764917 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
May 16 00:19:31.764979 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
May 16 00:19:31.765032 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
May 16 00:19:31.765087 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
May 16 00:19:31.765137 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
May 16 00:19:31.765186 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
May 16 00:19:31.765236 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
May 16 00:19:31.765287 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
May 16 00:19:31.765360 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
May 16 00:19:31.765426 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
May 16 00:19:31.765477 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
May 16 00:19:31.765532 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
May 16 00:19:31.765583 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
May 16 00:19:31.765633 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
May 16 00:19:31.765698 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
May 16 00:19:31.765747 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
May 16 00:19:31.765796 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
May 16 00:19:31.765846 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
May 16 00:19:31.765896 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
May 16 00:19:31.765947 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
May 16 00:19:31.767261 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
May 16 00:19:31.767332 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
May 16 00:19:31.767396 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
May 16 00:19:31.767447 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
May 16 00:19:31.767498 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
May 16 00:19:31.767556 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
May 16 00:19:31.767607 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
May 16 00:19:31.767661 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
May 16 00:19:31.767723 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
May 16 00:19:31.767789 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
May 16 00:19:31.767842 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
May 16 00:19:31.767892 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
May 16 00:19:31.767942 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
May 16 00:19:31.768000 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
May 16 00:19:31.768054 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
May 16 00:19:31.768104 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
May 16 00:19:31.768154 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
May 16 00:19:31.768205 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
May 16 00:19:31.768254 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
May 16 00:19:31.768304 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 16 00:19:31.768356 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
May 16 00:19:31.768406 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
May 16 00:19:31.768459 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
May 16 00:19:31.768510 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
May 16 00:19:31.768560 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
May 16 00:19:31.768610 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
May 16 00:19:31.768661 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
May 16 00:19:31.768726 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
May 16 00:19:31.768777 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
May 16 00:19:31.768829 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
May 16 00:19:31.768883 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
May 16 00:19:31.768935 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 16 00:19:31.768944 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
May 16 00:19:31.768950 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
May 16 00:19:31.768956 kernel: ACPI: PCI: Interrupt link LNKB disabled
May 16 00:19:31.769212 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 00:19:31.769218 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
May 16 00:19:31.769224 kernel: iommu: Default domain type: Translated
May 16 00:19:31.769230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 00:19:31.769239 kernel: PCI: Using ACPI for IRQ routing
May 16 00:19:31.769245 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 00:19:31.769251 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
May 16 00:19:31.769257 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
May 16 00:19:31.769320 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
May 16 00:19:31.769378 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
May 16 00:19:31.769428 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 00:19:31.769437 kernel: vgaarb: loaded
May 16 00:19:31.769443 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
May 16 00:19:31.769452 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
May 16 00:19:31.769458 kernel: clocksource: Switched to clocksource tsc-early
May 16 00:19:31.769464 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:19:31.769470 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:19:31.769475 kernel: pnp: PnP ACPI init
May 16 00:19:31.769527 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
May 16 00:19:31.769575 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
May 16 00:19:31.769622 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
May 16 00:19:31.769674 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
May 16 00:19:31.769724 kernel: pnp 00:06: [dma 2]
May 16 00:19:31.769777 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
May 16 00:19:31.769824 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
May 16 00:19:31.769871 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
May 16 00:19:31.769880 kernel: pnp: PnP ACPI: found 8 devices
May 16 00:19:31.769886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 00:19:31.769897 kernel: NET: Registered PF_INET protocol family
May 16 00:19:31.769907 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:19:31.769916 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 16 00:19:31.769925 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:19:31.769933 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 16 00:19:31.769942 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 16 00:19:31.769951 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 16 00:19:31.769967 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 16 00:19:31.769977 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 16 00:19:31.769983 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:19:31.769989 kernel: NET: Registered PF_XDP protocol family
May 16 00:19:31.770046 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 16 00:19:31.770101 kernel: pci
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 16 00:19:31.770154 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 16 00:19:31.770207 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 16 00:19:31.770261 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 16 00:19:31.770405 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 16 00:19:31.770638 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 16 00:19:31.770695 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 16 00:19:31.770774 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 16 00:19:31.770829 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 16 00:19:31.770884 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 16 00:19:31.770936 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 16 00:19:31.771038 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 16 00:19:31.771091 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 16 00:19:31.771142 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 16 00:19:31.771193 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 16 00:19:31.771247 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 16 00:19:31.771298 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 16 00:19:31.771350 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 16 00:19:31.771402 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 16 00:19:31.771453 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 16 00:19:31.771504 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 16 00:19:31.771558 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 16 00:19:31.771611 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 16 00:19:31.771662 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 16 00:19:31.771714 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 16 00:19:31.771765 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.771817 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.771886 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.771951 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772011 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772063 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772114 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772164 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772216 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772271 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] May 16 00:19:31.772339 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772395 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772447 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772498 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772549 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772600 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772651 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772703 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772753 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772807 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 16 00:19:31.772859 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.772910 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775007 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775072 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775127 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775179 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775231 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775286 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775371 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775422 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775473 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775524 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775574 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775624 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775675 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775728 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775779 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775829 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775880 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.775931 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.775991 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776042 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776093 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776153 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776207 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776258 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776309 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776359 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776410 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776478 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] May 16 00:19:31.776544 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776595 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776645 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776699 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776749 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776799 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776850 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.776901 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.776951 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777017 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777093 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777151 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777206 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777258 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777331 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777384 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777451 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777501 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777552 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777637 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777688 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777739 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777794 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777845 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 16 00:19:31.777895 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.777945 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 16 00:19:31.778049 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.778100 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 16 00:19:31.778151 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.778201 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 16 00:19:31.778252 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.778306 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 16 00:19:31.778357 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 16 00:19:31.778408 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 16 00:19:31.779991 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 16 00:19:31.780054 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 16 00:19:31.780108 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 16 00:19:31.780159 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 16 00:19:31.780215 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 16 00:19:31.780270 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 
16 00:19:31.780326 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 16 00:19:31.780376 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 16 00:19:31.780427 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 16 00:19:31.780480 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 16 00:19:31.780531 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 16 00:19:31.780582 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 16 00:19:31.780633 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 16 00:19:31.780685 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 16 00:19:31.780736 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 16 00:19:31.780791 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 16 00:19:31.780848 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 16 00:19:31.780899 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 16 00:19:31.780950 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 16 00:19:31.781036 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 16 00:19:31.781089 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 16 00:19:31.781140 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 16 00:19:31.781191 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 16 00:19:31.781245 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 16 00:19:31.781295 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 16 00:19:31.781346 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 16 00:19:31.781396 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 16 00:19:31.781446 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 16 00:19:31.781497 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 16 00:19:31.781548 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 16 00:19:31.781602 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 16 00:19:31.781654 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 16 00:19:31.781708 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 16 00:19:31.781759 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 16 00:19:31.781809 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 16 00:19:31.781861 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 16 00:19:31.781912 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 16 00:19:31.781971 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 16 00:19:31.782024 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 16 00:19:31.782079 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 16 00:19:31.782131 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 16 00:19:31.782183 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 16 00:19:31.782235 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 16 00:19:31.782286 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 16 00:19:31.782336 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 16 00:19:31.782387 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 16 00:19:31.782438 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] May 16 00:19:31.782489 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 16 00:19:31.782539 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 16 00:19:31.782593 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 16 00:19:31.782644 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 16 00:19:31.782694 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 16 00:19:31.782745 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 16 00:19:31.782795 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 16 00:19:31.782846 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 16 00:19:31.782897 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 16 00:19:31.782949 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 16 00:19:31.784087 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 16 00:19:31.784148 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 16 00:19:31.784201 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 16 00:19:31.784563 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 16 00:19:31.784624 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 16 00:19:31.784677 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 16 00:19:31.784730 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 16 00:19:31.784783 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 16 00:19:31.784838 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 16 00:19:31.784890 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 16 00:19:31.784941 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 16 00:19:31.785006 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 16 00:19:31.785058 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 16 00:19:31.785108 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 16 00:19:31.785159 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 16 00:19:31.785209 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 16 00:19:31.785260 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 16 00:19:31.785310 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 16 00:19:31.785399 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 16 00:19:31.785449 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 16 00:19:31.785503 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 16 00:19:31.785556 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 16 00:19:31.785607 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 16 00:19:31.785658 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 16 00:19:31.785709 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 16 00:19:31.785761 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 16 00:19:31.785811 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 16 00:19:31.786075 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 16 00:19:31.786132 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 16 00:19:31.786184 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] May 16 00:19:31.786242 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 16 00:19:31.786294 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 16 00:19:31.786352 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 16 00:19:31.786405 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 16 00:19:31.786457 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 16 00:19:31.786508 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 16 00:19:31.786560 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 16 00:19:31.786611 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 16 00:19:31.786662 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 16 00:19:31.786716 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 16 00:19:31.786768 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 16 00:19:31.786820 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 16 00:19:31.786870 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 16 00:19:31.786921 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 16 00:19:31.787046 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 16 00:19:31.787100 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 16 00:19:31.787151 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 16 00:19:31.787203 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 16 00:19:31.787254 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 16 00:19:31.787308 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 16 00:19:31.787359 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 16 00:19:31.787411 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 16 00:19:31.787463 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 16 00:19:31.787514 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 16 00:19:31.787566 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 16 00:19:31.787617 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 16 00:19:31.787667 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 16 00:19:31.787713 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 16 00:19:31.787762 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 16 00:19:31.787807 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 16 00:19:31.787852 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 16 00:19:31.787901 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 16 00:19:31.787949 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 16 00:19:31.788239 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 16 00:19:31.788289 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 16 00:19:31.788339 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 16 00:19:31.788386 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 16 00:19:31.788432 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 16 00:19:31.788478 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 16 00:19:31.788528 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] May 16 00:19:31.788575 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 16 00:19:31.788622 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 16 00:19:31.788675 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 16 00:19:31.788722 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 16 00:19:31.788768 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 16 00:19:31.788818 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 16 00:19:31.788865 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 16 00:19:31.788911 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 16 00:19:31.788967 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 16 00:19:31.789024 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 16 00:19:31.789077 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 16 00:19:31.789124 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 16 00:19:31.789174 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 16 00:19:31.789220 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 16 00:19:31.789269 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 16 00:19:31.789318 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 16 00:19:31.789368 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 16 00:19:31.789415 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 16 00:19:31.789475 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 16 00:19:31.789523 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 16 00:19:31.789576 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 16 00:19:31.789631 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 16 00:19:31.789678 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 16 00:19:31.790198 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 16 00:19:31.790257 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 16 00:19:31.790305 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 16 00:19:31.790358 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 16 00:19:31.790409 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 16 00:19:31.790460 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 16 00:19:31.790514 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 16 00:19:31.790561 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 16 00:19:31.790615 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 16 00:19:31.790662 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 16 00:19:31.790712 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 16 00:19:31.790762 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 16 00:19:31.790812 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 16 00:19:31.790860 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 16 00:19:31.790911 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 16 00:19:31.790964 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 16 00:19:31.791014 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 16 00:19:31.791067 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 16 00:19:31.791115 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 16 00:19:31.791162 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 16 00:19:31.791212 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 16 00:19:31.791260 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 16 00:19:31.791306 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 16 00:19:31.791357 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 16 00:19:31.791408 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 16 00:19:31.791459 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 16 00:19:31.791507 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 16 00:19:31.791560 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 16 00:19:31.791607 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 16 00:19:31.791657 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 16 00:19:31.791707 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 16 00:19:31.791758 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 16 00:19:31.791805 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 16 00:19:31.791858 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 16 00:19:31.791906 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 16 00:19:31.791955 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 16 00:19:31.793886 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 16 00:19:31.793939 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 16 00:19:31.794001 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 16 00:19:31.794054 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 16 00:19:31.794100 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 16 00:19:31.794150 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 16 00:19:31.794201 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 16 00:19:31.794252 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 16 00:19:31.794298 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 16 00:19:31.794351 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 16 00:19:31.794398 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 16 00:19:31.794449 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 16 00:19:31.794496 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 16 00:19:31.794549 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 16 00:19:31.794609 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 16 00:19:31.794666 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 16 00:19:31.794677 kernel: PCI: CLS 32 bytes, default 64 May 16 00:19:31.794684 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 16 00:19:31.794691 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 16 
00:19:31.794698 kernel: clocksource: Switched to clocksource tsc May 16 00:19:31.794706 kernel: Initialise system trusted keyrings May 16 00:19:31.794713 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 16 00:19:31.794719 kernel: Key type asymmetric registered May 16 00:19:31.794725 kernel: Asymmetric key parser 'x509' registered May 16 00:19:31.794731 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 16 00:19:31.794737 kernel: io scheduler mq-deadline registered May 16 00:19:31.794743 kernel: io scheduler kyber registered May 16 00:19:31.794750 kernel: io scheduler bfq registered May 16 00:19:31.794804 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 16 00:19:31.794860 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.794916 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 16 00:19:31.794981 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795038 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 16 00:19:31.795094 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795147 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 16 00:19:31.795201 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795256 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 16 00:19:31.795309 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795399 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 16 00:19:31.795451 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795504 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 16 00:19:31.795558 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795613 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 16 00:19:31.795676 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795729 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 16 00:19:31.795781 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795834 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 16 00:19:31.795900 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.795975 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 16 00:19:31.796039 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796098 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 16 00:19:31.796152 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796205 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 16 00:19:31.796261 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796319 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 16 00:19:31.796372 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796425 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 16 00:19:31.796477 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796531 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 16 00:19:31.796586 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796640 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 16 00:19:31.796693 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796752 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 16 00:19:31.796810 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796864 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 16 00:19:31.796915 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.796980 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 16 00:19:31.797033 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797087 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 16 00:19:31.797138 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797192 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 16 00:19:31.797247 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797301 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 16 00:19:31.797353 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797406 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 16 00:19:31.797458 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797513 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 16 00:19:31.797564 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797617 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 16 00:19:31.797667 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797719 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 16 00:19:31.797770 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797823 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 16 00:19:31.797877 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.797930 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 16 00:19:31.797996 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.798051 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 16 00:19:31.798103 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.798159 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 16 00:19:31.798211 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.798263 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 16 00:19:31.798315 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 16 00:19:31.798325 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 16 00:19:31.798331 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:19:31.798341 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 00:19:31.798347 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 16 00:19:31.798354 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 00:19:31.798360 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 00:19:31.798413 kernel: rtc_cmos 00:01: registered as rtc0 May 16 00:19:31.798461 kernel: rtc_cmos 00:01: setting system clock to 2025-05-16T00:19:31 UTC (1747354771) May 16 00:19:31.798508 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 16 00:19:31.798517 kernel: intel_pstate: CPU model not supported May 16 00:19:31.798526 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 00:19:31.798532 kernel: NET: Registered PF_INET6 protocol family May 16 00:19:31.798538 kernel: Segment Routing with IPv6 May 16 00:19:31.798544 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:19:31.798551 kernel: NET: Registered PF_PACKET protocol family May 16 00:19:31.798557 kernel: Key type dns_resolver registered May 16 00:19:31.798563 kernel: IPI shorthand broadcast: enabled May 16 00:19:31.798569 kernel: sched_clock: Marking stable (895003629, 225485798)->(1184247158, -63757731) May 16 00:19:31.798576 kernel: registered taskstats version 1 May 16 00:19:31.798583 kernel: Loading compiled-in X.509 certificates May 16 00:19:31.798589 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 36d9e3bf63b9b28466bcfa7a508d814673a33a26' May 16 00:19:31.798595 kernel: Key type .fscrypt registered May 16 00:19:31.798601 kernel: Key type fscrypt-provisioning registered May 16 00:19:31.798607 
kernel: ima: No TPM chip found, activating TPM-bypass! May 16 00:19:31.798613 kernel: ima: Allocated hash algorithm: sha1 May 16 00:19:31.798619 kernel: ima: No architecture policies found May 16 00:19:31.798626 kernel: clk: Disabling unused clocks May 16 00:19:31.798633 kernel: Freeing unused kernel image (initmem) memory: 43600K May 16 00:19:31.798639 kernel: Write protecting the kernel read-only data: 40960k May 16 00:19:31.798646 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 16 00:19:31.798652 kernel: Run /init as init process May 16 00:19:31.798658 kernel: with arguments: May 16 00:19:31.798664 kernel: /init May 16 00:19:31.798670 kernel: with environment: May 16 00:19:31.798677 kernel: HOME=/ May 16 00:19:31.798683 kernel: TERM=linux May 16 00:19:31.798688 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:19:31.798697 systemd[1]: Successfully made /usr/ read-only. May 16 00:19:31.798705 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 00:19:31.798711 systemd[1]: Detected virtualization vmware. May 16 00:19:31.798718 systemd[1]: Detected architecture x86-64. May 16 00:19:31.798724 systemd[1]: Running in initrd. May 16 00:19:31.798730 systemd[1]: No hostname configured, using default hostname. May 16 00:19:31.798736 systemd[1]: Hostname set to <localhost>. May 16 00:19:31.798744 systemd[1]: Initializing machine ID from random generator. May 16 00:19:31.798750 systemd[1]: Queued start job for default target initrd.target. May 16 00:19:31.798757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:19:31.798763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:19:31.798771 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:19:31.798777 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:19:31.798784 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:19:31.798792 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 00:19:31.798799 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:19:31.798805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:19:31.798811 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:19:31.798818 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:19:31.798824 systemd[1]: Reached target paths.target - Path Units. May 16 00:19:31.798831 systemd[1]: Reached target slices.target - Slice Units. May 16 00:19:31.798837 systemd[1]: Reached target swap.target - Swaps. May 16 00:19:31.798845 systemd[1]: Reached target timers.target - Timer Units. May 16 00:19:31.798852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:19:31.798858 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 00:19:31.798864 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:19:31.798872 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 16 00:19:31.798879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:19:31.798885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:19:31.798892 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:19:31.798898 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:19:31.798906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:19:31.798912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:19:31.798919 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:19:31.798925 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:19:31.798934 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:19:31.798944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:19:31.798951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:31.799078 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:19:31.799088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:19:31.799095 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:19:31.799102 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:19:31.799123 systemd-journald[216]: Collecting audit messages is disabled. May 16 00:19:31.799142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:31.799150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:31.799157 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:19:31.799163 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:19:31.799171 kernel: Bridge firewalling registered May 16 00:19:31.799177 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:19:31.799184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:19:31.799190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:19:31.799197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:31.799204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:19:31.799210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:19:31.799218 systemd-journald[216]: Journal started May 16 00:19:31.799234 systemd-journald[216]: Runtime Journal (/run/log/journal/53b12d687a49460492cb35d93a8298f4) is 4.8M, max 38.6M, 33.7M free. May 16 00:19:31.751926 systemd-modules-load[218]: Inserted module 'overlay' May 16 00:19:31.775001 systemd-modules-load[218]: Inserted module 'br_netfilter' May 16 00:19:31.802934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:19:31.802947 systemd[1]: Started systemd-journald.service - Journal Service. 
May 16 00:19:31.803659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:19:31.808504 dracut-cmdline[238]: dracut-dracut-053 May 16 00:19:31.810174 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 00:19:31.816551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:19:31.817694 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:19:31.842167 systemd-resolved[274]: Positive Trust Anchors: May 16 00:19:31.842416 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:19:31.842582 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:19:31.844955 systemd-resolved[274]: Defaulting to hostname 'linux'. May 16 00:19:31.845672 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:19:31.845803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:19:31.854971 kernel: SCSI subsystem initialized May 16 00:19:31.860977 kernel: Loading iSCSI transport class v2.0-870. May 16 00:19:31.867974 kernel: iscsi: registered transport (tcp) May 16 00:19:31.880997 kernel: iscsi: registered transport (qla4xxx) May 16 00:19:31.881033 kernel: QLogic iSCSI HBA Driver May 16 00:19:31.900745 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:19:31.901591 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:19:31.920145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:19:31.920169 kernel: device-mapper: uevent: version 1.0.3 May 16 00:19:31.921264 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:19:31.951978 kernel: raid6: avx2x4 gen() 46853 MB/s May 16 00:19:31.968970 kernel: raid6: avx2x2 gen() 53482 MB/s May 16 00:19:31.986171 kernel: raid6: avx2x1 gen() 45531 MB/s May 16 00:19:31.986191 kernel: raid6: using algorithm avx2x2 gen() 53482 MB/s May 16 00:19:32.004185 kernel: raid6: .... xor() 32582 MB/s, rmw enabled May 16 00:19:32.004221 kernel: raid6: using avx2x2 recovery algorithm May 16 00:19:32.016971 kernel: xor: automatically using best checksumming function avx May 16 00:19:32.106986 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:19:32.112087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:19:32.113178 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 16 00:19:32.126087 systemd-udevd[434]: Using default interface naming scheme 'v255'. May 16 00:19:32.128949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:19:32.131025 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:19:32.150704 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation May 16 00:19:32.165575 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:19:32.166570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:19:32.241582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:19:32.243495 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:19:32.259037 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:19:32.259662 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:19:32.260640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:19:32.261004 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:19:32.262101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:19:32.274163 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:19:32.314972 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 16 00:19:32.317146 kernel: vmw_pvscsi: using 64bit dma May 16 00:19:32.317166 kernel: vmw_pvscsi: max_id: 16 May 16 00:19:32.317175 kernel: vmw_pvscsi: setting ring_pages to 8 May 16 00:19:32.320974 kernel: vmw_pvscsi: enabling reqCallThreshold May 16 00:19:32.320992 kernel: vmw_pvscsi: driver-based request coalescing enabled May 16 00:19:32.321001 kernel: vmw_pvscsi: using MSI-X May 16 00:19:32.324967 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 16 00:19:32.329993 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 16 00:19:32.330115 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 16 00:19:32.333972 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 16 00:19:32.335974 kernel: libata version 3.00 loaded. May 16 00:19:32.341974 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 16 00:19:32.345487 kernel: ata_piix 0000:00:07.1: version 2.13 May 16 00:19:32.345561 kernel: scsi host1: ata_piix May 16 00:19:32.347971 kernel: scsi host2: ata_piix May 16 00:19:32.350003 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 16 00:19:32.350021 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 16 00:19:32.351973 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 16 00:19:32.354997 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:19:32.357144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:19:32.357222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:32.357569 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:32.357678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:19:32.357747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:32.357987 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 16 00:19:32.358605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:32.378659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:32.379315 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:32.402779 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:32.521011 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 16 00:19:32.526988 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 16 00:19:32.536480 kernel: AVX2 version of gcm_enc/dec engaged. May 16 00:19:32.536512 kernel: AES CTR mode by8 optimization enabled May 16 00:19:32.537701 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 16 00:19:32.553975 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 16 00:19:32.554093 kernel: sd 0:0:0:0: [sda] Write Protect is off May 16 00:19:32.554159 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 16 00:19:32.555969 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 16 00:19:32.556049 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 16 00:19:32.559500 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 16 00:19:32.559518 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 16 00:19:32.571095 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 16 00:19:32.571241 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 00:19:32.581987 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 00:19:32.588328 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (477) May 16 00:19:32.597812 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 16 00:19:32.603459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 16 00:19:32.609203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 16 00:19:32.614978 kernel: BTRFS: device fsid a728581e-9e7f-4655-895a-4f66e17e3645 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (486) May 16 00:19:32.623144 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 16 00:19:32.623369 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 16 00:19:32.624277 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:19:32.686972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 16 00:19:32.693975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 16 00:19:33.698579 disk-uuid[589]: The operation has completed successfully. May 16 00:19:33.699285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 16 00:19:33.734786 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:19:33.734873 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:19:33.751525 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:19:33.764068 sh[605]: Success May 16 00:19:33.782978 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 16 00:19:33.846214 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:19:33.848845 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
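
For context (not from the log): verity-setup.service, started above for /dev/mapper/usr, turns the verity.usr* kernel parameters into a device-mapper verity target. Conceptually this corresponds to a veritysetup invocation of the following generic shape; the exact device paths and options Flatcar wires up are not shown in the log, so treat this as a sketch only.

    # Generic form: map <data-device> to /dev/mapper/usr, verifying every
    # read against the hash tree rooted at <root-hash>:
    veritysetup open <data-device> usr <hash-device> <root-hash>

Any read whose block hash does not match the tree then fails, which is what makes the read-only /usr partition tamper-evident.
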
May 16 00:19:33.856146 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 00:19:33.872750 kernel: BTRFS info (device dm-0): first mount of filesystem a728581e-9e7f-4655-895a-4f66e17e3645 May 16 00:19:33.872772 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:33.872784 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:19:33.874640 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:19:33.874659 kernel: BTRFS info (device dm-0): using free space tree May 16 00:19:33.881969 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 16 00:19:33.883752 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:19:33.884498 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 16 00:19:33.886714 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:19:33.917762 kernel: BTRFS info (device sda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:19:33.917802 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:33.917812 kernel: BTRFS info (device sda6): using free space tree May 16 00:19:33.930980 kernel: BTRFS info (device sda6): enabling ssd optimizations May 16 00:19:33.933976 kernel: BTRFS info (device sda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:19:33.936625 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:19:33.938684 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 00:19:33.955937 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 16 00:19:33.957125 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 00:19:34.015809 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:19:34.018627 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:19:34.033794 ignition[662]: Ignition 2.20.0 May 16 00:19:34.033800 ignition[662]: Stage: fetch-offline May 16 00:19:34.033819 ignition[662]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:34.033824 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 16 00:19:34.033880 ignition[662]: parsed url from cmdline: "" May 16 00:19:34.033882 ignition[662]: no config URL provided May 16 00:19:34.033884 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:19:34.033889 ignition[662]: no config at "/usr/lib/ignition/user.ign" May 16 00:19:34.034283 ignition[662]: config successfully fetched May 16 00:19:34.034301 ignition[662]: parsing config with SHA512: ada00b09ac71a7914f0eff91ce4262122d8bddab3b3d4338e263b55564c4d0be4cf3f7c954742c194a8087444d684522e9d41444229e5812b60c89220d6cab5f May 16 00:19:34.039229 unknown[662]: fetched base config from "system" May 16 00:19:34.039484 ignition[662]: fetch-offline: fetch-offline passed May 16 00:19:34.039235 unknown[662]: fetched user config from "vmware" May 16 00:19:34.039528 ignition[662]: Ignition finished successfully May 16 00:19:34.040798 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
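
For context (not from the log): on the "vmware" platform Ignition reads its user config from VMware guestinfo properties (the same properties that ignition-delete-config.service clears later in this boot). The log identifies the fetched config only by its SHA512, so its contents are unknown; a hypothetical minimal spec-3.x config consistent with the files written later in this boot might look like:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          {
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"]
          }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/etc/flatcar/update.conf",
            "contents": { "source": "data:,REBOOT_STRATEGY=off%0A" }
          }
        ]
      }
    }

The SSH key and the update.conf contents here are placeholders, not values recovered from the log.
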
May 16 00:19:34.042354 systemd-networkd[790]: lo: Link UP May 16 00:19:34.042361 systemd-networkd[790]: lo: Gained carrier May 16 00:19:34.043141 systemd-networkd[790]: Enumeration completed May 16 00:19:34.043374 systemd-networkd[790]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 16 00:19:34.043455 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:19:34.047260 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 16 00:19:34.047371 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 16 00:19:34.043790 systemd[1]: Reached target network.target - Network. May 16 00:19:34.044083 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:19:34.045506 systemd-networkd[790]: ens192: Link UP May 16 00:19:34.045508 systemd-networkd[790]: ens192: Gained carrier May 16 00:19:34.045696 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 00:19:34.059642 ignition[799]: Ignition 2.20.0 May 16 00:19:34.059648 ignition[799]: Stage: kargs May 16 00:19:34.059765 ignition[799]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:34.059771 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 16 00:19:34.060285 ignition[799]: kargs: kargs passed May 16 00:19:34.060310 ignition[799]: Ignition finished successfully May 16 00:19:34.061167 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:19:34.062010 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 00:19:34.074196 ignition[806]: Ignition 2.20.0 May 16 00:19:34.074206 ignition[806]: Stage: disks May 16 00:19:34.074336 ignition[806]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:34.074342 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 16 00:19:34.075031 ignition[806]: disks: disks passed May 16 00:19:34.075066 ignition[806]: Ignition finished successfully May 16 00:19:34.075860 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:19:34.076087 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:19:34.076211 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:19:34.076419 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:19:34.076630 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:19:34.076821 systemd[1]: Reached target basic.target - Basic System. May 16 00:19:34.077522 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:19:34.132354 systemd-fsck[814]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 16 00:19:34.134345 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:19:34.135135 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:19:34.220971 kernel: EXT4-fs (sda9): mounted filesystem f27adc75-a467-4bfb-9c02-79a2879452a3 r/w with ordered data mode. Quota mode: none. May 16 00:19:34.221307 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:19:34.221850 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:19:34.222916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:19:34.225009 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
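
For context (not from the log): the unit /etc/systemd/network/10-dracut-cmdline-99.network that systemd-networkd applied to ens192 above is generated inside the initrd from the kernel command line. Its contents are not shown in the log; a dracut-generated fallback of this kind is typically just a match-all DHCP stanza, roughly:

    [Match]
    Name=*

    [Network]
    DHCP=yes

This is an assumption based on dracut's defaults, not a file recovered from this system.
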
May 16 00:19:34.225483 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 00:19:34.225514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:19:34.225530 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:19:34.240013 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:19:34.242187 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 00:19:34.247977 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (822) May 16 00:19:34.250971 kernel: BTRFS info (device sda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:19:34.250993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:34.251003 kernel: BTRFS info (device sda6): using free space tree May 16 00:19:34.255971 kernel: BTRFS info (device sda6): enabling ssd optimizations May 16 00:19:34.257560 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:19:34.272693 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:19:34.275882 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory May 16 00:19:34.278255 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:19:34.280870 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:19:34.335891 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:19:34.336817 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:19:34.338066 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:19:34.346988 kernel: BTRFS info (device sda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:19:34.358688 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 00:19:34.365364 ignition[935]: INFO : Ignition 2.20.0 May 16 00:19:34.365364 ignition[935]: INFO : Stage: mount May 16 00:19:34.365733 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:19:34.365733 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 16 00:19:34.366375 ignition[935]: INFO : mount: mount passed May 16 00:19:34.366484 ignition[935]: INFO : Ignition finished successfully May 16 00:19:34.367235 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:19:34.367891 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:19:34.871465 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:19:34.873318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:19:34.909849 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (947) May 16 00:19:34.909887 kernel: BTRFS info (device sda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:19:34.909896 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:34.911651 kernel: BTRFS info (device sda6): using free space tree May 16 00:19:34.914972 kernel: BTRFS info (device sda6): enabling ssd optimizations May 16 00:19:34.916455 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:19:34.935366 ignition[963]: INFO : Ignition 2.20.0 May 16 00:19:34.935366 ignition[963]: INFO : Stage: files May 16 00:19:34.936212 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:19:34.936212 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 16 00:19:34.936742 ignition[963]: DEBUG : files: compiled without relabeling support, skipping May 16 00:19:34.937169 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:19:34.937169 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:19:34.953327 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:19:34.953664 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:19:34.953950 unknown[963]: wrote ssh authorized keys file for user: core May 16 00:19:34.954307 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:19:34.972292 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 16 00:19:34.972292 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 16 00:19:35.017986 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:19:35.231298 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 16 00:19:35.231298 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 00:19:35.231751 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:19:35.231751 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:19:35.231751 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:19:35.231751 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:19:35.231751 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:19:35.231751 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:19:35.231751 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:19:35.232979 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:19:35.232979 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:19:35.232979 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 00:19:35.232979 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 00:19:35.232979 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 00:19:35.232979 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 16 00:19:35.284257 systemd-networkd[790]: ens192: Gained IPv6LL May 16 00:19:35.909908 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 00:19:36.105789 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 00:19:36.105789 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 16 00:19:36.106537 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 16 00:19:36.106537 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 00:19:36.106831 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:19:36.106831 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:19:36.106831 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 00:19:36.106831 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 00:19:36.107429 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:19:36.107429 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:19:36.107429 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 00:19:36.107429 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:19:36.160934 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:19:36.163411 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:19:36.163637 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:19:36.163637 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 00:19:36.163637 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:19:36.163637 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:19:36.164701 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:19:36.164701 ignition[963]: INFO : files: files passed May 16 00:19:36.164701 ignition[963]: INFO : Ignition finished successfully May 16 00:19:36.164891 systemd[1]: 
Finished ignition-files.service - Ignition (files). May 16 00:19:36.165736 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:19:36.168050 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:19:36.171632 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:19:36.171716 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 00:19:36.174786 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:19:36.174786 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:19:36.175667 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:19:36.176570 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:19:36.176999 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:19:36.177624 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 00:19:36.210198 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:19:36.210259 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:19:36.210542 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:19:36.210664 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:19:36.210856 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:19:36.211318 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:19:36.220498 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:19:36.221523 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:19:36.231582 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 00:19:36.231743 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:19:36.232018 systemd[1]: Stopped target timers.target - Timer Units. May 16 00:19:36.232202 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:19:36.232270 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:19:36.232630 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:19:36.232774 systemd[1]: Stopped target basic.target - Basic System. May 16 00:19:36.232951 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:19:36.233137 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:19:36.233344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:19:36.233545 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:19:36.233916 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:19:36.234117 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:19:36.234312 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:19:36.234494 systemd[1]: Stopped target swap.target - Swaps. May 16 00:19:36.234654 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
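
For context (not from the log): the preset changes recorded at the end of the files stage above ("setting preset to enabled/disabled") amount to a systemd preset file of this shape; the exact path Ignition writes is not shown in the log, so the name below is assumed.

    # e.g. /etc/systemd/system-preset/20-ignition.preset (name assumed)
    enable prepare-helm.service
    disable coreos-metadata.service

Applying the preset then creates or removes the corresponding enablement symlinks, which is the "removing enablement symlink(s)" step visible above.
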
May 16 00:19:36.234715 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:19:36.235078 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 00:19:36.235230 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:19:36.235425 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:19:36.235469 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:19:36.235633 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:19:36.235692 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:19:36.235935 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:19:36.236027 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:19:36.236234 systemd[1]: Stopped target paths.target - Path Units. May 16 00:19:36.236361 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:19:36.237990 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:19:36.238152 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:19:36.238346 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:19:36.238542 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:19:36.238639 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:19:36.238796 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:19:36.238839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:19:36.239117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:19:36.239202 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:19:36.239419 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:19:36.239496 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:19:36.240188 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:19:36.242530 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:19:36.242681 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:19:36.242772 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:19:36.242966 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:19:36.243047 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:19:36.245562 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:19:36.245608 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:19:36.254017 ignition[1019]: INFO : Ignition 2.20.0 May 16 00:19:36.254350 ignition[1019]: INFO : Stage: umount May 16 00:19:36.254460 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:19:36.254460 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 16 00:19:36.255019 ignition[1019]: INFO : umount: umount passed May 16 00:19:36.255460 ignition[1019]: INFO : Ignition finished successfully May 16 00:19:36.255798 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:19:36.255872 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:19:36.256168 systemd[1]: Stopped target network.target - Network. 
May 16 00:19:36.256257 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:19:36.256313 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:19:36.256434 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:19:36.256458 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 00:19:36.256557 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:19:36.256580 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:19:36.256780 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:19:36.256801 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:19:36.257018 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 00:19:36.257152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:19:36.262078 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:19:36.262151 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:19:36.263666 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 00:19:36.263818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:19:36.263850 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:19:36.265265 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 00:19:36.265415 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:19:36.265464 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:19:36.267875 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:19:36.267944 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 00:19:36.268383 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:19:36.268424 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:19:36.269246 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:19:36.269344 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:19:36.269372 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:19:36.269502 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 16 00:19:36.269523 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 16 00:19:36.269643 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:19:36.269663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:19:36.269819 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:19:36.269840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:19:36.269966 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:19:36.270921 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:19:36.281289 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:19:36.281469 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:19:36.281743 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
May 16 00:19:36.281765 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:19:36.281873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:19:36.281888 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:19:36.282057 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:19:36.282114 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 00:19:36.282244 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:19:36.282266 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:19:36.282380 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:19:36.282404 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:36.283023 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:19:36.284028 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:19:36.284056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:19:36.284707 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 16 00:19:36.284732 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:19:36.286024 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:19:36.286048 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:19:36.286176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:19:36.286198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:36.286896 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 00:19:36.286927 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 00:19:36.287126 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:19:36.287910 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:19:36.288255 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:19:36.288308 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:19:36.363764 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:19:36.363850 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:19:36.364249 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 00:19:36.364364 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:19:36.364396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:19:36.364988 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:19:36.382008 systemd[1]: Switching root. May 16 00:19:36.438175 systemd-journald[216]: Journal stopped May 16 00:19:38.050260 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). 
May 16 00:19:38.050282 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:19:38.050291 kernel: SELinux: policy capability open_perms=1 May 16 00:19:38.050296 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:19:38.050302 kernel: SELinux: policy capability always_check_network=0 May 16 00:19:38.050307 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:19:38.050315 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:19:38.050321 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:19:38.050326 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:19:38.050332 systemd[1]: Successfully loaded SELinux policy in 35.610ms. May 16 00:19:38.050339 kernel: audit: type=1403 audit(1747354777.271:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:19:38.050345 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.447ms. May 16 00:19:38.050352 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 00:19:38.050360 systemd[1]: Detected virtualization vmware. May 16 00:19:38.050367 systemd[1]: Detected architecture x86-64. May 16 00:19:38.050373 systemd[1]: Detected first boot. May 16 00:19:38.050380 systemd[1]: Initializing machine ID from random generator. May 16 00:19:38.050388 zram_generator::config[1065]: No configuration found. May 16 00:19:38.050470 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 16 00:19:38.050481 kernel: Guest personality initialized and is active May 16 00:19:38.050487 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 00:19:38.050493 kernel: Initialized host personality May 16 00:19:38.050499 kernel: NET: Registered PF_VSOCK protocol family May 16 00:19:38.050505 systemd[1]: Populated /etc with preset unit settings. May 16 00:19:38.050514 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 16 00:19:38.050523 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 16 00:19:38.050530 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 00:19:38.050536 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:19:38.050542 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 00:19:38.050549 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:19:38.050555 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 00:19:38.050564 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 00:19:38.050571 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 00:19:38.050577 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 00:19:38.050584 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 00:19:38.050590 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
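
For context (not from the log): the "Ignoring unknown escape sequences" warning above comes from the backslashes in the grep -Po patterns on line 11 of coreos-metadata.service; systemd consumes single backslashes in unit files before the shell ever sees them. A sketch of the usual fix, assuming the command stays embedded in ExecStart, is to double the backslashes:

    # Inside the unit file, write \\K and \\d so the shell receives \K and \d:
    grep -Po "inet \\K[\\d.]+"

Alternatively the pipeline can be moved into a standalone script, which sidesteps systemd's unit-file escaping entirely.
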
May 16 00:19:38.050597 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 00:19:38.050604 systemd[1]: Created slice user.slice - User and Session Slice. May 16 00:19:38.050610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:19:38.050618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:19:38.050627 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 00:19:38.050634 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 00:19:38.050640 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 00:19:38.050647 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:19:38.050654 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 00:19:38.050661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:19:38.050669 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 00:19:38.050676 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 00:19:38.050683 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 00:19:38.050690 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 00:19:38.050696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:19:38.050703 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:19:38.050710 systemd[1]: Reached target slices.target - Slice Units. May 16 00:19:38.050717 systemd[1]: Reached target swap.target - Swaps. May 16 00:19:38.050723 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 00:19:38.050731 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 00:19:38.050739 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 00:19:38.050746 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:19:38.050752 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:19:38.050761 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:19:38.050768 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 00:19:38.050775 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 00:19:38.050782 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 00:19:38.050789 systemd[1]: Mounting media.mount - External Media Directory... May 16 00:19:38.050796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:38.050809 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 00:19:38.050830 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 00:19:38.050844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 00:19:38.050853 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
May 16 00:19:38.050866 systemd[1]: Reached target machines.target - Containers. May 16 00:19:38.050876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 00:19:38.050883 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 16 00:19:38.050890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:19:38.050897 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 00:19:38.050904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:19:38.050911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:19:38.050919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:19:38.050926 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 00:19:38.050933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:19:38.050940 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:19:38.050947 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:19:38.050954 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 00:19:38.050970 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:19:38.050977 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:19:38.050986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 00:19:38.050993 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:19:38.051001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:19:38.051008 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 00:19:38.051016 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 00:19:38.051023 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 00:19:38.051029 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:19:38.051036 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:19:38.051045 systemd[1]: Stopped verity-setup.service. May 16 00:19:38.051052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:38.051059 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 00:19:38.051066 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 00:19:38.051072 systemd[1]: Mounted media.mount - External Media Directory. May 16 00:19:38.051079 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 00:19:38.051086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 00:19:38.051093 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 00:19:38.051100 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 00:19:38.051108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
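
For context (not from the log): the many modprobe@<name>.service starts above are all instances of a single template unit; the instance name after the "@" is handed to modprobe (upstream systemd runs roughly modprobe -abq on it). As an illustration:

    # One template unit file serves configfs, dm_mod, drm, efi_pstore,
    # fuse and loop; the instance name selects the module to load:
    systemctl start modprobe@fuse.service
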
May 16 00:19:38.051115 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:19:38.051122 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 00:19:38.051129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:19:38.051135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:19:38.051142 kernel: fuse: init (API version 7.39) May 16 00:19:38.051149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:19:38.051155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:19:38.051164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:19:38.051172 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 00:19:38.051179 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:19:38.051185 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 00:19:38.051192 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 00:19:38.051199 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 00:19:38.051206 kernel: loop: module loaded May 16 00:19:38.051213 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 00:19:38.051220 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 00:19:38.051230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:19:38.051238 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:19:38.051246 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 00:19:38.051253 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 00:19:38.051260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 00:19:38.051267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:38.051291 systemd-journald[1165]: Collecting audit messages is disabled. May 16 00:19:38.051311 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 00:19:38.051320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:19:38.051328 systemd-journald[1165]: Journal started May 16 00:19:38.051345 systemd-journald[1165]: Runtime Journal (/run/log/journal/3ee7edbd69b646a0b9faa36b6738cb47) is 4.8M, max 38.6M, 33.7M free. May 16 00:19:37.822194 systemd[1]: Queued start job for default target multi-user.target. May 16 00:19:37.829997 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 16 00:19:37.830210 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 00:19:38.051992 jq[1135]: true May 16 00:19:38.052556 jq[1181]: true May 16 00:19:38.059977 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 00:19:38.064952 kernel: ACPI: bus type drm_connector registered May 16 00:19:38.064990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:19:38.067108 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
May 16 00:19:38.079353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:19:38.079403 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:19:38.084197 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:19:38.084696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:19:38.084937 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:19:38.085096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:19:38.085464 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 00:19:38.089155 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 00:19:38.089345 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 00:19:38.089583 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 00:19:38.093892 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 00:19:38.099309 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 00:19:38.101342 kernel: loop0: detected capacity change from 0 to 151640 May 16 00:19:38.101078 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 00:19:38.103367 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 16 00:19:38.103504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:19:38.114731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:19:38.116777 systemd-journald[1165]: Time spent on flushing to /var/log/journal/3ee7edbd69b646a0b9faa36b6738cb47 is 74.705ms for 1855 entries. May 16 00:19:38.116777 systemd-journald[1165]: System Journal (/var/log/journal/3ee7edbd69b646a0b9faa36b6738cb47) is 8M, max 584.8M, 576.8M free. May 16 00:19:38.222346 systemd-journald[1165]: Received client request to flush runtime journal. May 16 00:19:38.222380 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:19:38.160909 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 16 00:19:38.160919 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 16 00:19:38.171187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:19:38.176547 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 00:19:38.207895 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 16 00:19:38.220569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:19:38.222080 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 16 00:19:38.228814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 00:19:38.241033 ignition[1193]: Ignition 2.20.0 May 16 00:19:38.242061 kernel: loop1: detected capacity change from 0 to 109808 May 16 00:19:38.242051 ignition[1193]: deleting config from guestinfo properties May 16 00:19:38.247904 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
May 16 00:19:38.252985 ignition[1193]: Successfully deleted config May 16 00:19:38.253203 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:19:38.257119 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 16 00:19:38.261078 udevadm[1237]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 00:19:38.280510 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 16 00:19:38.280523 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 16 00:19:38.283443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:19:38.291986 kernel: loop2: detected capacity change from 0 to 2960 May 16 00:19:38.463262 kernel: loop3: detected capacity change from 0 to 229808 May 16 00:19:38.518093 kernel: loop4: detected capacity change from 0 to 151640 May 16 00:19:38.632982 kernel: loop5: detected capacity change from 0 to 109808 May 16 00:19:38.662009 kernel: loop6: detected capacity change from 0 to 2960 May 16 00:19:38.681981 kernel: loop7: detected capacity change from 0 to 229808 May 16 00:19:38.707502 (sd-merge)[1248]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 16 00:19:38.708082 (sd-merge)[1248]: Merged extensions into '/usr'. May 16 00:19:38.711794 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... May 16 00:19:38.711854 systemd[1]: Reloading... May 16 00:19:38.764973 zram_generator::config[1272]: No configuration found. May 16 00:19:38.841208 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 16 00:19:38.861456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:19:38.909545 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:19:38.909682 systemd[1]: Reloading finished in 197 ms. May 16 00:19:38.923490 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 00:19:38.930981 systemd[1]: Starting ensure-sysext.service... May 16 00:19:38.934052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:19:38.938485 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 00:19:38.941771 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:19:38.953040 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)... May 16 00:19:38.953051 systemd[1]: Reloading... May 16 00:19:38.976633 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:19:38.976808 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 00:19:38.977392 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 00:19:38.977553 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. May 16 00:19:38.977601 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. 
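
For context (not from the log): the (sd-merge) lines above are systemd-sysext overlaying the listed extension images onto /usr; the "kubernetes" image is the one Ignition linked into /etc/extensions/ during the files stage earlier in this boot. On a running system the merge can be inspected with:

    systemd-sysext status    # lists hierarchies and the merged extensions
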
May 16 00:19:38.981528 systemd-udevd[1334]: Using default interface naming scheme 'v255'. May 16 00:19:38.992020 zram_generator::config[1364]: No configuration found. May 16 00:19:39.014760 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:19:39.014766 systemd-tmpfiles[1332]: Skipping /boot May 16 00:19:39.020776 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:19:39.020784 systemd-tmpfiles[1332]: Skipping /boot May 16 00:19:39.065349 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 16 00:19:39.082744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:19:39.126272 systemd[1]: Reloading finished in 173 ms. May 16 00:19:39.138154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:19:39.138672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:19:39.155535 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:19:39.156705 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 00:19:39.164155 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 00:19:39.168517 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:19:39.172403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:19:39.178091 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 00:19:39.191228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:39.193654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:19:39.195967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:19:39.198912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:19:39.200050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:39.200119 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 00:19:39.200179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:39.201586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:39.201671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:39.201721 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 16 00:19:39.201772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:39.206045 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:39.208685 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:19:39.208876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:39.208937 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 00:19:39.209035 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:39.212752 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:19:39.213348 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:19:39.219178 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 00:19:39.220186 systemd[1]: Finished ensure-sysext.service. May 16 00:19:39.228157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:19:39.228279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:19:39.232446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:19:39.236241 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 00:19:39.238944 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 00:19:39.239226 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:19:39.239325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:19:39.244868 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:19:39.244993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:19:39.245199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:19:39.250482 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 00:19:39.258899 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 00:19:39.282988 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1420) May 16 00:19:39.286973 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 16 00:19:39.292972 kernel: ACPI: button: Power Button [PWRF] May 16 00:19:39.315411 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 00:19:39.316236 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:19:39.324259 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 00:19:39.329209 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 00:19:39.351656 augenrules[1494]: No rules May 16 00:19:39.357200 systemd[1]: audit-rules.service: Deactivated successfully. 
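augenrules prints "No rules" because /etc/audit/rules.d/ is empty, so audit-rules.service loads an empty set. For orientation, a hypothetical rule file it would pick up uses plain auditctl syntax, one rule per line:

    # /etc/audit/rules.d/10-exec.rules (hypothetical example, not shipped here)
    -a always,exit -F arch=b64 -S execve -k exec-log

After dropping in such a file, `systemctl restart audit-rules.service` reloads the set, which is exactly what the sudo session near the end of this log does.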
May 16 00:19:39.357330 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:19:39.360853 systemd-networkd[1440]: lo: Link UP May 16 00:19:39.360858 systemd-networkd[1440]: lo: Gained carrier May 16 00:19:39.362746 systemd-networkd[1440]: Enumeration completed May 16 00:19:39.362807 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:19:39.362967 systemd-networkd[1440]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 16 00:19:39.369316 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 16 00:19:39.369591 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 16 00:19:39.367160 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 00:19:39.370175 systemd-networkd[1440]: ens192: Link UP May 16 00:19:39.370271 systemd-networkd[1440]: ens192: Gained carrier May 16 00:19:39.372101 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 00:19:39.372472 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 00:19:39.405038 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 16 00:19:39.407212 systemd-resolved[1441]: Positive Trust Anchors: May 16 00:19:39.407337 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:19:39.407362 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:19:39.415499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 16 00:19:39.415676 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 00:19:39.415845 systemd[1]: Reached target time-set.target - System Time Set. May 16 00:19:39.416646 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 00:19:39.417327 systemd-resolved[1441]: Defaulting to hostname 'linux'. May 16 00:19:39.418410 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:19:39.418552 systemd[1]: Reached target network.target - Network. May 16 00:19:39.418648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:19:39.441700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 00:19:39.455973 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 16 00:21:00.868918 systemd-resolved[1441]: Clock change detected. Flushing caches. May 16 00:21:00.868942 systemd-timesyncd[1460]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org). May 16 00:21:00.868975 systemd-timesyncd[1460]: Initial clock synchronization to Fri 2025-05-16 00:21:00.868887 UTC. 
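ens192 is configured from /etc/systemd/network/00-vmware.network before it gains carrier and, later, an IPv6 link-local address. The log does not show that file's contents; a minimal sketch of a .network file matching the name (DHCP assumed, since the coreos-metadata unit greps for a dynamically assigned inet address):

    # /etc/systemd/network/00-vmware.network (sketch; the shipped file may differ)
    [Match]
    Name=ens192

    [Network]
    DHCP=yes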
May 16 00:21:00.889429 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 16 00:21:00.901574 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 00:21:00.901808 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:21:00.905857 (udev-worker)[1427]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 16 00:21:00.907388 kernel: mousedev: PS/2 mouse device common for all mice May 16 00:21:00.911295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:21:00.924033 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 00:21:00.925949 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 00:21:00.959000 lvm[1516]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:21:00.989429 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 00:21:00.989730 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:21:00.990998 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 00:21:01.004286 lvm[1519]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:21:01.032844 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 00:21:01.039112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:21:01.039356 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:21:01.039520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 00:21:01.039645 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 00:21:01.039860 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 00:21:01.040006 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 00:21:01.040125 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 00:21:01.040233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:21:01.040253 systemd[1]: Reached target paths.target - Path Units. May 16 00:21:01.040338 systemd[1]: Reached target timers.target - Timer Units. May 16 00:21:01.041405 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 00:21:01.042485 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 00:21:01.044091 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 00:21:01.044287 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 00:21:01.044414 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 00:21:01.046573 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 00:21:01.046903 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 00:21:01.047385 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
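At this point the socket units (docker.socket, sshd.socket, systemd-hostnamed.socket, and the rest listed above) are listening while the services behind them stay stopped; systemd starts each service on the first connection. On a live host the mapping is visible with:

    systemctl list-sockets         # LISTEN path, socket UNIT, and the service it ACTIVATES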
May 16 00:21:01.047543 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:21:01.047645 systemd[1]: Reached target basic.target - Basic System. May 16 00:21:01.047764 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 00:21:01.047783 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 00:21:01.048575 systemd[1]: Starting containerd.service - containerd container runtime... May 16 00:21:01.051454 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 00:21:01.052312 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 00:21:01.055312 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 00:21:01.055429 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 00:21:01.057447 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 00:21:01.059641 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 00:21:01.062411 jq[1528]: false May 16 00:21:01.062402 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 00:21:01.069773 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 00:21:01.072177 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 00:21:01.072803 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:21:01.073256 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:21:01.073885 systemd[1]: Starting update-engine.service - Update Engine... May 16 00:21:01.076441 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 00:21:01.079420 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 16 00:21:01.088807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:21:01.088941 extend-filesystems[1529]: Found loop4 May 16 00:21:01.089437 extend-filesystems[1529]: Found loop5 May 16 00:21:01.089580 extend-filesystems[1529]: Found loop6 May 16 00:21:01.089742 extend-filesystems[1529]: Found loop7 May 16 00:21:01.089881 extend-filesystems[1529]: Found sda May 16 00:21:01.090015 extend-filesystems[1529]: Found sda1 May 16 00:21:01.090157 extend-filesystems[1529]: Found sda2 May 16 00:21:01.090401 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 00:21:01.090481 extend-filesystems[1529]: Found sda3 May 16 00:21:01.090617 extend-filesystems[1529]: Found usr May 16 00:21:01.090769 extend-filesystems[1529]: Found sda4 May 16 00:21:01.091482 extend-filesystems[1529]: Found sda6 May 16 00:21:01.091482 extend-filesystems[1529]: Found sda7 May 16 00:21:01.091482 extend-filesystems[1529]: Found sda9 May 16 00:21:01.091482 extend-filesystems[1529]: Checking size of /dev/sda9 May 16 00:21:01.094543 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:21:01.094686 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
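extend-filesystems inventories every block device it can see (loop4 through sda9 above) before deciding, just below, that the old size is kept for /dev/sda9, i.e. the root filesystem already fills its partition. Roughly the same inventory can be reproduced with:

    lsblk -o NAME,LABEL,FSTYPE,SIZE /dev/sda    # partitions the scan enumerated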
May 16 00:21:01.103031 jq[1537]: true May 16 00:21:01.110233 extend-filesystems[1529]: Old size kept for /dev/sda9 May 16 00:21:01.110446 extend-filesystems[1529]: Found sr0 May 16 00:21:01.114265 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:21:01.114415 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 00:21:01.124220 update_engine[1536]: I20250516 00:21:01.124171 1536 main.cc:92] Flatcar Update Engine starting May 16 00:21:01.128586 (ntainerd)[1554]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 00:21:01.129676 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 16 00:21:01.132442 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 16 00:21:01.137201 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:21:01.137330 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 00:21:01.142086 jq[1551]: true May 16 00:21:01.147397 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1418) May 16 00:21:01.148126 dbus-daemon[1527]: [system] SELinux support is enabled May 16 00:21:01.149800 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 00:21:01.150629 systemd-logind[1535]: Watching system buttons on /dev/input/event1 (Power Button) May 16 00:21:01.153811 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 00:21:01.157460 systemd-logind[1535]: New seat seat0. May 16 00:21:01.160328 systemd[1]: Started systemd-logind.service - User Login Management. May 16 00:21:01.161450 dbus-daemon[1527]: [system] Successfully activated service 'org.freedesktop.systemd1' May 16 00:21:01.165276 update_engine[1536]: I20250516 00:21:01.165246 1536 update_check_scheduler.cc:74] Next update check in 3m51s May 16 00:21:01.165556 tar[1542]: linux-amd64/LICENSE May 16 00:21:01.165556 tar[1542]: linux-amd64/helm May 16 00:21:01.165440 systemd[1]: Started update-engine.service - Update Engine. May 16 00:21:01.166041 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:21:01.166138 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 00:21:01.167015 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:21:01.167087 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 00:21:01.174008 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 00:21:01.187028 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 16 00:21:01.205199 unknown[1565]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 16 00:21:01.208572 unknown[1565]: Core dump limit set to -1 May 16 00:21:01.230583 bash[1589]: Updated "/home/core/.ssh/authorized_keys" May 16 00:21:01.232868 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 00:21:01.233576 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
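update-engine starts and schedules its first check ("Next update check in 3m51s"), while locksmithd comes up with the "reboot" strategy. Flatcar exposes both through small status CLIs; a quick check would look like:

    update_engine_client -status   # e.g. CURRENT_OP=UPDATE_STATUS_IDLE
    locksmithctl status            # reboot strategy and any held cluster locks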
May 16 00:21:01.336419 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:21:01.343778 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:21:01.357050 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 00:21:01.361449 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 00:21:01.374096 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:21:01.374231 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 00:21:01.376479 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 00:21:01.390852 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 00:21:01.393109 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 00:21:01.396614 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 00:21:01.396824 systemd[1]: Reached target getty.target - Login Prompts. May 16 00:21:01.477899 containerd[1554]: time="2025-05-16T00:21:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 00:21:01.478316 containerd[1554]: time="2025-05-16T00:21:01.478244610Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.483928747Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="3.726µs" May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.483951141Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.483967925Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484055543Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484065801Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484080636Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484112800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484119956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484240916Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484250093Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 00:21:01.484257 containerd[1554]: 
time="2025-05-16T00:21:01.484256408Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 00:21:01.484257 containerd[1554]: time="2025-05-16T00:21:01.484261290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 00:21:01.484494 containerd[1554]: time="2025-05-16T00:21:01.484317895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 00:21:01.484494 containerd[1554]: time="2025-05-16T00:21:01.484463690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 00:21:01.484494 containerd[1554]: time="2025-05-16T00:21:01.484485313Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 00:21:01.484494 containerd[1554]: time="2025-05-16T00:21:01.484492252Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 00:21:01.484557 containerd[1554]: time="2025-05-16T00:21:01.484505881Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 00:21:01.484928 containerd[1554]: time="2025-05-16T00:21:01.484625662Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 00:21:01.484928 containerd[1554]: time="2025-05-16T00:21:01.484665354Z" level=info msg="metadata content store policy set" policy=shared May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486029307Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486057617Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486069434Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486077831Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486085210Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486091592Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486101702Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486109402Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486115596Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486122048Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 
00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486127419Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 00:21:01.486176 containerd[1554]: time="2025-05-16T00:21:01.486134608Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486189986Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486206161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486220672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486230661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486237212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486243154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486249339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486261984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486274050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486284787Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486292091Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486327716Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486342035Z" level=info msg="Start snapshots syncer" May 16 00:21:01.486387 containerd[1554]: time="2025-05-16T00:21:01.486374922Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 00:21:01.486584 containerd[1554]: time="2025-05-16T00:21:01.486546382Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 00:21:01.486584 containerd[1554]: time="2025-05-16T00:21:01.486577245Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 00:21:01.486682 containerd[1554]: time="2025-05-16T00:21:01.486620064Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 00:21:01.486682 containerd[1554]: time="2025-05-16T00:21:01.486672498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 00:21:01.486711 containerd[1554]: time="2025-05-16T00:21:01.486685287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 00:21:01.486711 containerd[1554]: time="2025-05-16T00:21:01.486693747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 00:21:01.486711 containerd[1554]: time="2025-05-16T00:21:01.486700968Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 00:21:01.486711 containerd[1554]: time="2025-05-16T00:21:01.486709142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 00:21:01.486767 containerd[1554]: time="2025-05-16T00:21:01.486715143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 00:21:01.486767 containerd[1554]: time="2025-05-16T00:21:01.486721529Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 00:21:01.486767 containerd[1554]: time="2025-05-16T00:21:01.486735428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 00:21:01.486767 containerd[1554]: 
time="2025-05-16T00:21:01.486742854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 00:21:01.486767 containerd[1554]: time="2025-05-16T00:21:01.486748736Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486767231Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486775751Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486781639Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486787243Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486791949Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486798203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486811156Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486821350Z" level=info msg="runtime interface created" May 16 00:21:01.486831 containerd[1554]: time="2025-05-16T00:21:01.486826349Z" level=info msg="created NRI interface" May 16 00:21:01.486966 containerd[1554]: time="2025-05-16T00:21:01.486834078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 00:21:01.486966 containerd[1554]: time="2025-05-16T00:21:01.486844006Z" level=info msg="Connect containerd service" May 16 00:21:01.486966 containerd[1554]: time="2025-05-16T00:21:01.486859484Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:21:01.487655 containerd[1554]: time="2025-05-16T00:21:01.487235264Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:21:01.614166 tar[1542]: linux-amd64/README.md May 16 00:21:01.625827 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660608475Z" level=info msg="Start subscribing containerd event" May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660649737Z" level=info msg="Start recovering state" May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660716720Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660720548Z" level=info msg="Start event monitor" May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660741717Z" level=info msg="Start cni network conf syncer for default" May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660747633Z" level=info msg="Start streaming server" May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660753363Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660757665Z" level=info msg="runtime interface starting up..." May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660761304Z" level=info msg="starting plugins..." May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660771318Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 00:21:01.660834 containerd[1554]: time="2025-05-16T00:21:01.660757886Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:21:01.661064 containerd[1554]: time="2025-05-16T00:21:01.660864449Z" level=info msg="containerd successfully booted in 0.183193s" May 16 00:21:01.661169 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:21:02.770557 systemd-networkd[1440]: ens192: Gained IPv6LL May 16 00:21:02.772747 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:21:02.773234 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:21:02.774726 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 16 00:21:02.776441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:02.779025 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 00:21:02.802023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:21:02.814580 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:21:02.814858 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 16 00:21:02.815387 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 00:21:04.608507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:04.609071 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 00:21:04.609526 systemd[1]: Startup finished in 977ms (kernel) + 5.642s (initrd) + 5.965s (userspace) = 12.585s. May 16 00:21:04.618916 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:21:04.691176 login[1638]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 16 00:21:04.692671 login[1639]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 16 00:21:04.702563 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 00:21:04.702873 systemd-logind[1535]: New session 1 of user core. May 16 00:21:04.704364 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 00:21:04.707062 systemd-logind[1535]: New session 2 of user core. May 16 00:21:04.715266 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
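The CRI configuration containerd dumped above (runc through io.containerd.runc.v2 with SystemdCgroup=true, CNI binaries in /opt/cni/bin, conf in /etc/cni/net.d) corresponds roughly to this fragment of a version-3 config.toml; a sketch for orientation, not the file actually present on the host:

    version = 3

    [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
      runtime_type = 'io.containerd.runc.v2'

    [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
      SystemdCgroup = true    # systemd cgroup driver, as in the dumped config

    [plugins.'io.containerd.cri.v1.runtime'.cni]
      bin_dir = '/opt/cni/bin'
      conf_dir = '/etc/cni/net.d'

The "failed to load cni during init" error above is benign at this stage: /etc/cni/net.d is empty until a network plugin drops a conflist there, and the CRI plugin re-checks the directory afterwards.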
May 16 00:21:04.716836 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 00:21:04.738160 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:21:04.739868 systemd-logind[1535]: New session c1 of user core. May 16 00:21:04.876271 systemd[1724]: Queued start job for default target default.target. May 16 00:21:04.891409 systemd[1724]: Created slice app.slice - User Application Slice. May 16 00:21:04.891489 systemd[1724]: Reached target paths.target - Paths. May 16 00:21:04.891560 systemd[1724]: Reached target timers.target - Timers. May 16 00:21:04.892459 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 00:21:04.899251 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 00:21:04.899280 systemd[1724]: Reached target sockets.target - Sockets. May 16 00:21:04.899304 systemd[1724]: Reached target basic.target - Basic System. May 16 00:21:04.899326 systemd[1724]: Reached target default.target - Main User Target. May 16 00:21:04.899342 systemd[1724]: Startup finished in 154ms. May 16 00:21:04.899457 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 00:21:04.900503 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 00:21:04.901124 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 00:21:05.878922 kubelet[1717]: E0516 00:21:05.878886 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:21:05.880624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:21:05.880708 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:21:05.880967 systemd[1]: kubelet.service: Consumed 777ms CPU time, 272.6M memory peak. May 16 00:21:16.131136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:21:16.132338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:16.213632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:16.226527 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:21:16.248973 kubelet[1766]: E0516 00:21:16.248941 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:21:16.251531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:21:16.251609 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:21:16.251992 systemd[1]: kubelet.service: Consumed 92ms CPU time, 109.7M memory peak. May 16 00:21:26.502132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:21:26.503842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:26.872692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
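kubelet exits immediately (here and on every scheduled restart below) because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written only by `kubeadm init` or `kubeadm join`, so until one of those runs this crash loop is expected noise. For orientation, a hypothetical minimal KubeletConfiguration of the kind kubeadm generates:

    # /var/lib/kubelet/config.yaml (hypothetical; normally written by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup=true in the containerd config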
May 16 00:21:26.879592 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:21:26.904598 kubelet[1781]: E0516 00:21:26.904563 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:21:26.906253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:21:26.906336 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:21:26.906709 systemd[1]: kubelet.service: Consumed 101ms CPU time, 108.2M memory peak. May 16 00:21:31.343311 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 00:21:31.344702 systemd[1]: Started sshd@0-139.178.70.108:22-147.75.109.163:53504.service - OpenSSH per-connection server daemon (147.75.109.163:53504). May 16 00:21:31.398281 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 53504 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:21:31.399025 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:21:31.401498 systemd-logind[1535]: New session 3 of user core. May 16 00:21:31.412519 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 00:21:31.466469 systemd[1]: Started sshd@1-139.178.70.108:22-147.75.109.163:53518.service - OpenSSH per-connection server daemon (147.75.109.163:53518). May 16 00:21:31.498780 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 53518 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:21:31.499605 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:21:31.502736 systemd-logind[1535]: New session 4 of user core. May 16 00:21:31.508424 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 00:21:31.558321 sshd[1796]: Connection closed by 147.75.109.163 port 53518 May 16 00:21:31.558386 sshd-session[1794]: pam_unix(sshd:session): session closed for user core May 16 00:21:31.581563 systemd[1]: sshd@1-139.178.70.108:22-147.75.109.163:53518.service: Deactivated successfully. May 16 00:21:31.582674 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:21:31.583652 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. May 16 00:21:31.584636 systemd[1]: Started sshd@2-139.178.70.108:22-147.75.109.163:53534.service - OpenSSH per-connection server daemon (147.75.109.163:53534). May 16 00:21:31.585515 systemd-logind[1535]: Removed session 4. May 16 00:21:31.630021 sshd[1801]: Accepted publickey for core from 147.75.109.163 port 53534 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:21:31.630964 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:21:31.635864 systemd-logind[1535]: New session 5 of user core. May 16 00:21:31.641487 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 00:21:31.688734 sshd[1804]: Connection closed by 147.75.109.163 port 53534 May 16 00:21:31.689177 sshd-session[1801]: pam_unix(sshd:session): session closed for user core May 16 00:21:31.700093 systemd[1]: sshd@2-139.178.70.108:22-147.75.109.163:53534.service: Deactivated successfully. 
May 16 00:21:31.701435 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:21:31.702481 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. May 16 00:21:31.704089 systemd[1]: Started sshd@3-139.178.70.108:22-147.75.109.163:53542.service - OpenSSH per-connection server daemon (147.75.109.163:53542). May 16 00:21:31.704718 systemd-logind[1535]: Removed session 5. May 16 00:21:31.742463 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 53542 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:21:31.743272 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:21:31.746492 systemd-logind[1535]: New session 6 of user core. May 16 00:21:31.762564 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 00:21:31.813006 sshd[1812]: Connection closed by 147.75.109.163 port 53542 May 16 00:21:31.813814 sshd-session[1809]: pam_unix(sshd:session): session closed for user core May 16 00:21:31.825095 systemd[1]: sshd@3-139.178.70.108:22-147.75.109.163:53542.service: Deactivated successfully. May 16 00:21:31.826073 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:21:31.826992 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. May 16 00:21:31.827894 systemd[1]: Started sshd@4-139.178.70.108:22-147.75.109.163:53550.service - OpenSSH per-connection server daemon (147.75.109.163:53550). May 16 00:21:31.829510 systemd-logind[1535]: Removed session 6. May 16 00:21:31.866250 sshd[1817]: Accepted publickey for core from 147.75.109.163 port 53550 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:21:31.867108 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:21:31.870943 systemd-logind[1535]: New session 7 of user core. May 16 00:21:31.877452 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 00:21:31.944037 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 00:21:31.944266 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:21:31.957933 sudo[1821]: pam_unix(sudo:session): session closed for user root May 16 00:21:31.959567 sshd[1820]: Connection closed by 147.75.109.163 port 53550 May 16 00:21:31.959584 sshd-session[1817]: pam_unix(sshd:session): session closed for user core May 16 00:21:31.967499 systemd[1]: sshd@4-139.178.70.108:22-147.75.109.163:53550.service: Deactivated successfully. May 16 00:21:31.968398 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:21:31.968825 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. May 16 00:21:31.969864 systemd[1]: Started sshd@5-139.178.70.108:22-147.75.109.163:53566.service - OpenSSH per-connection server daemon (147.75.109.163:53566). May 16 00:21:31.970650 systemd-logind[1535]: Removed session 7. May 16 00:21:32.009988 sshd[1826]: Accepted publickey for core from 147.75.109.163 port 53566 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:21:32.010813 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:21:32.014751 systemd-logind[1535]: New session 8 of user core. May 16 00:21:32.022504 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 16 00:21:32.072573 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 00:21:32.072748 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:21:32.075047 sudo[1831]: pam_unix(sudo:session): session closed for user root May 16 00:21:32.078019 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 00:21:32.078174 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:21:32.084829 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:21:32.111551 augenrules[1853]: No rules May 16 00:21:32.112219 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:21:32.112387 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:21:32.113129 sudo[1830]: pam_unix(sudo:session): session closed for user root May 16 00:21:32.113833 sshd[1829]: Connection closed by 147.75.109.163 port 53566 May 16 00:21:32.114042 sshd-session[1826]: pam_unix(sshd:session): session closed for user core May 16 00:21:32.126997 systemd[1]: sshd@5-139.178.70.108:22-147.75.109.163:53566.service: Deactivated successfully. May 16 00:21:32.128011 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:21:32.129043 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit. May 16 00:21:32.129941 systemd[1]: Started sshd@6-139.178.70.108:22-147.75.109.163:53574.service - OpenSSH per-connection server daemon (147.75.109.163:53574). May 16 00:21:32.131576 systemd-logind[1535]: Removed session 8. May 16 00:21:32.170247 sshd[1861]: Accepted publickey for core from 147.75.109.163 port 53574 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:21:32.171049 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:21:32.175425 systemd-logind[1535]: New session 9 of user core. May 16 00:21:32.181459 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:21:32.230077 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:21:32.230723 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:21:32.696504 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 00:21:32.700641 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 00:21:33.020311 dockerd[1883]: time="2025-05-16T00:21:33.020233679Z" level=info msg="Starting up" May 16 00:21:33.021782 dockerd[1883]: time="2025-05-16T00:21:33.021766617Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 00:21:33.076824 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3387413633-merged.mount: Deactivated successfully. May 16 00:21:33.121780 dockerd[1883]: time="2025-05-16T00:21:33.121755052Z" level=info msg="Loading containers: start." May 16 00:21:33.268366 kernel: Initializing XFRM netlink socket May 16 00:21:33.337263 systemd-networkd[1440]: docker0: Link UP May 16 00:21:33.371022 dockerd[1883]: time="2025-05-16T00:21:33.370995937Z" level=info msg="Loading containers: done." 
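dockerd comes up through socket activation, creates the docker0 bridge (which networkd reports as Link UP), and finishes loading containers. A quick sanity check matching the daemon details logged just below (version 27.4.1, overlay2) would be:

    docker info --format '{{.ServerVersion}} {{.Driver}}'   # expect: 27.4.1 overlay2
    ip link show docker0                                    # bridge created by the daemon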
May 16 00:21:33.388195 dockerd[1883]: time="2025-05-16T00:21:33.388169164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:21:33.388274 dockerd[1883]: time="2025-05-16T00:21:33.388218511Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 16 00:21:33.388293 dockerd[1883]: time="2025-05-16T00:21:33.388274849Z" level=info msg="Daemon has completed initialization" May 16 00:21:33.420518 dockerd[1883]: time="2025-05-16T00:21:33.420491129Z" level=info msg="API listen on /run/docker.sock" May 16 00:21:33.420716 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 00:21:34.417623 containerd[1554]: time="2025-05-16T00:21:34.417577541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 16 00:21:35.059734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551285117.mount: Deactivated successfully. May 16 00:21:36.125081 containerd[1554]: time="2025-05-16T00:21:36.125050169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:36.126060 containerd[1554]: time="2025-05-16T00:21:36.125706901Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 16 00:21:36.126060 containerd[1554]: time="2025-05-16T00:21:36.125830569Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:36.127222 containerd[1554]: time="2025-05-16T00:21:36.127206597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:36.127861 containerd[1554]: time="2025-05-16T00:21:36.127746887Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.710146356s" May 16 00:21:36.127861 containerd[1554]: time="2025-05-16T00:21:36.127766315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 16 00:21:36.128233 containerd[1554]: time="2025-05-16T00:21:36.128169836Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 16 00:21:37.131686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 16 00:21:37.135437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:37.229900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
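The PullImage / ImageCreate / "Pulled image" triplets that follow are containerd's CRI image service fetching the v1.33.1 control-plane images into the k8s.io namespace. The same fetch can be reproduced by hand (registry.k8s.io allows anonymous pulls):

    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.33.1
    ctr -n k8s.io images ls -q | grep kube-apiserver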
May 16 00:21:37.236525 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:21:37.259829 kubelet[2145]: E0516 00:21:37.259807 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:21:37.261526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:21:37.261617 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:21:37.262530 systemd[1]: kubelet.service: Consumed 89ms CPU time, 108M memory peak. May 16 00:21:37.762029 containerd[1554]: time="2025-05-16T00:21:37.761504081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:37.768834 containerd[1554]: time="2025-05-16T00:21:37.768814690Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 16 00:21:37.774009 containerd[1554]: time="2025-05-16T00:21:37.773993613Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:37.781438 containerd[1554]: time="2025-05-16T00:21:37.781409666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:37.781929 containerd[1554]: time="2025-05-16T00:21:37.781915216Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.653729238s" May 16 00:21:37.781979 containerd[1554]: time="2025-05-16T00:21:37.781970281Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 16 00:21:37.782308 containerd[1554]: time="2025-05-16T00:21:37.782290614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 16 00:21:38.792823 containerd[1554]: time="2025-05-16T00:21:38.792790818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:38.793268 containerd[1554]: time="2025-05-16T00:21:38.793191793Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 16 00:21:38.793938 containerd[1554]: time="2025-05-16T00:21:38.793528034Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:38.794792 containerd[1554]: time="2025-05-16T00:21:38.794771701Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:38.795497 containerd[1554]: time="2025-05-16T00:21:38.795278942Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.012870211s" May 16 00:21:38.795497 containerd[1554]: time="2025-05-16T00:21:38.795296192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 16 00:21:38.795608 containerd[1554]: time="2025-05-16T00:21:38.795592371Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 16 00:21:39.602715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577115372.mount: Deactivated successfully. May 16 00:21:39.980403 containerd[1554]: time="2025-05-16T00:21:39.980366986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:39.985809 containerd[1554]: time="2025-05-16T00:21:39.985770441Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 16 00:21:39.992929 containerd[1554]: time="2025-05-16T00:21:39.992905651Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:40.003257 containerd[1554]: time="2025-05-16T00:21:40.003225154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:40.003836 containerd[1554]: time="2025-05-16T00:21:40.003592944Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.207981703s" May 16 00:21:40.003836 containerd[1554]: time="2025-05-16T00:21:40.003615049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 16 00:21:40.003914 containerd[1554]: time="2025-05-16T00:21:40.003905142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 16 00:21:40.484633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843017630.mount: Deactivated successfully. 
May 16 00:21:41.271315 containerd[1554]: time="2025-05-16T00:21:41.271277679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:41.280202 containerd[1554]: time="2025-05-16T00:21:41.280173655Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 16 00:21:41.288248 containerd[1554]: time="2025-05-16T00:21:41.288230274Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:41.293518 containerd[1554]: time="2025-05-16T00:21:41.293491418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:41.294250 containerd[1554]: time="2025-05-16T00:21:41.293976581Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.290054166s" May 16 00:21:41.294250 containerd[1554]: time="2025-05-16T00:21:41.293999465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 16 00:21:41.294547 containerd[1554]: time="2025-05-16T00:21:41.294494574Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:21:41.812212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943521437.mount: Deactivated successfully. 
May 16 00:21:41.814548 containerd[1554]: time="2025-05-16T00:21:41.814112543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:41.814548 containerd[1554]: time="2025-05-16T00:21:41.814390926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 00:21:41.814548 containerd[1554]: time="2025-05-16T00:21:41.814526468Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:41.815535 containerd[1554]: time="2025-05-16T00:21:41.815513732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:41.816089 containerd[1554]: time="2025-05-16T00:21:41.815886193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.377074ms" May 16 00:21:41.816089 containerd[1554]: time="2025-05-16T00:21:41.815902498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 00:21:41.816154 containerd[1554]: time="2025-05-16T00:21:41.816142444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 16 00:21:45.133207 containerd[1554]: time="2025-05-16T00:21:45.133177351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:45.134431 containerd[1554]: time="2025-05-16T00:21:45.134404568Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 16 00:21:45.135394 containerd[1554]: time="2025-05-16T00:21:45.134863518Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:45.136062 containerd[1554]: time="2025-05-16T00:21:45.136033701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:45.136744 containerd[1554]: time="2025-05-16T00:21:45.136664971Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.320507972s" May 16 00:21:45.136744 containerd[1554]: time="2025-05-16T00:21:45.136682880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 16 00:21:46.036750 update_engine[1536]: I20250516 00:21:46.036379 1536 
update_attempter.cc:509] Updating boot flags... May 16 00:21:46.065457 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2262) May 16 00:21:47.381728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 16 00:21:47.385816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:47.567158 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 00:21:47.567299 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 00:21:47.567530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:47.569227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:47.589882 systemd[1]: Reload requested from client PID 2279 ('systemctl') (unit session-9.scope)... May 16 00:21:47.589960 systemd[1]: Reloading... May 16 00:21:47.658394 zram_generator::config[2326]: No configuration found. May 16 00:21:47.709672 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 16 00:21:47.727701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:21:47.793383 systemd[1]: Reloading finished in 203 ms. May 16 00:21:47.820473 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 00:21:47.820529 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 00:21:47.820694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:47.821852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:48.172276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:48.176078 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:21:48.228605 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:48.228605 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:21:48.228605 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
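[annotation] The three deprecation warnings repeat on every kubelet start: --container-runtime-endpoint and --volume-plugin-dir now belong in the KubeletConfiguration file, and --pod-infra-container-image is scheduled for removal in 1.35. A sketch of the equivalent config-file fields, embedded as YAML and parsed with gopkg.in/yaml.v3 just to show the keys; the field names follow the v1beta1 KubeletConfiguration as I understand it, and the concrete endpoint and directory values are assumptions taken from this host's own log lines:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// With these parameters in the config file, the unit's ExecStart can shrink
// to `kubelet --config=/var/lib/kubelet/config.yaml`.
const kubeletConfig = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (socket assumed from this node's containerd)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir (directory from the flexvolume entry later in the log)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`

func main() {
	var cfg map[string]any
	if err := yaml.Unmarshal([]byte(kubeletConfig), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kind:", cfg["kind"])
}
```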
May 16 00:21:48.269220 kubelet[2391]: I0516 00:21:48.269171 2391 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:21:48.510409 kubelet[2391]: I0516 00:21:48.510309 2391 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 00:21:48.510409 kubelet[2391]: I0516 00:21:48.510330 2391 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:21:48.511367 kubelet[2391]: I0516 00:21:48.510823 2391 server.go:956] "Client rotation is on, will bootstrap in background" May 16 00:21:48.540460 kubelet[2391]: I0516 00:21:48.540437 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:21:48.543174 kubelet[2391]: E0516 00:21:48.543141 2391 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 16 00:21:48.557718 kubelet[2391]: I0516 00:21:48.557693 2391 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 00:21:48.562524 kubelet[2391]: I0516 00:21:48.562460 2391 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:21:48.567529 kubelet[2391]: I0516 00:21:48.567493 2391 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:21:48.572749 kubelet[2391]: I0516 00:21:48.567529 2391 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:21:48.572857 kubelet[2391]: I0516 00:21:48.572754 2391 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:21:48.572857 kubelet[2391]: I0516 00:21:48.572763 
2391 container_manager_linux.go:303] "Creating device plugin manager" May 16 00:21:48.572857 kubelet[2391]: I0516 00:21:48.572848 2391 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:48.577111 kubelet[2391]: I0516 00:21:48.577101 2391 kubelet.go:480] "Attempting to sync node with API server" May 16 00:21:48.577151 kubelet[2391]: I0516 00:21:48.577116 2391 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:21:48.577151 kubelet[2391]: I0516 00:21:48.577135 2391 kubelet.go:386] "Adding apiserver pod source" May 16 00:21:48.578976 kubelet[2391]: I0516 00:21:48.578814 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:21:48.589700 kubelet[2391]: I0516 00:21:48.589574 2391 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 00:21:48.591190 kubelet[2391]: I0516 00:21:48.591125 2391 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 00:21:48.594801 kubelet[2391]: W0516 00:21:48.594300 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:21:48.598263 kubelet[2391]: E0516 00:21:48.598057 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 16 00:21:48.598263 kubelet[2391]: E0516 00:21:48.598115 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 16 00:21:48.598429 kubelet[2391]: I0516 00:21:48.598416 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:21:48.598527 kubelet[2391]: I0516 00:21:48.598447 2391 server.go:1289] "Started kubelet" May 16 00:21:48.602824 kubelet[2391]: I0516 00:21:48.602746 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:21:48.605425 kubelet[2391]: E0516 00:21:48.603143 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fda080f8b6971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:21:48.598430065 +0000 UTC m=+0.419999585,LastTimestamp:2025-05-16 00:21:48.598430065 +0000 UTC m=+0.419999585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:21:48.605679 kubelet[2391]: I0516 00:21:48.605660 2391 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:21:48.607692 kubelet[2391]: I0516 00:21:48.607646 2391 
kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 00:21:48.608264 kubelet[2391]: I0516 00:21:48.608250 2391 server.go:317] "Adding debug handlers to kubelet server" May 16 00:21:48.610387 kubelet[2391]: I0516 00:21:48.610370 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:21:48.610527 kubelet[2391]: E0516 00:21:48.610515 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:48.614093 kubelet[2391]: I0516 00:21:48.613761 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:21:48.614093 kubelet[2391]: I0516 00:21:48.613904 2391 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:21:48.614093 kubelet[2391]: I0516 00:21:48.614027 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:21:48.614827 kubelet[2391]: I0516 00:21:48.614819 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:21:48.614897 kubelet[2391]: I0516 00:21:48.614892 2391 reconciler.go:26] "Reconciler: start to sync state" May 16 00:21:48.616033 kubelet[2391]: E0516 00:21:48.616014 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" May 16 00:21:48.616222 kubelet[2391]: I0516 00:21:48.616146 2391 factory.go:223] Registration of the systemd container factory successfully May 16 00:21:48.616222 kubelet[2391]: I0516 00:21:48.616190 2391 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:21:48.616933 kubelet[2391]: E0516 00:21:48.616825 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 16 00:21:48.617010 kubelet[2391]: I0516 00:21:48.617000 2391 factory.go:223] Registration of the containerd container factory successfully May 16 00:21:48.626563 kubelet[2391]: I0516 00:21:48.626547 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 00:21:48.626646 kubelet[2391]: I0516 00:21:48.626641 2391 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 00:21:48.626688 kubelet[2391]: I0516 00:21:48.626684 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 00:21:48.626724 kubelet[2391]: I0516 00:21:48.626719 2391 kubelet.go:2436] "Starting kubelet main sync loop" May 16 00:21:48.626774 kubelet[2391]: E0516 00:21:48.626764 2391 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:21:48.627663 kubelet[2391]: E0516 00:21:48.627648 2391 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:21:48.628783 kubelet[2391]: E0516 00:21:48.628770 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 16 00:21:48.629592 kubelet[2391]: I0516 00:21:48.629440 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:21:48.629592 kubelet[2391]: I0516 00:21:48.629447 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:21:48.629592 kubelet[2391]: I0516 00:21:48.629456 2391 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:48.632784 kubelet[2391]: I0516 00:21:48.632624 2391 policy_none.go:49] "None policy: Start" May 16 00:21:48.632784 kubelet[2391]: I0516 00:21:48.632637 2391 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:21:48.632784 kubelet[2391]: I0516 00:21:48.632644 2391 state_mem.go:35] "Initializing new in-memory state store" May 16 00:21:48.639473 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:21:48.652512 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:21:48.654439 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 00:21:48.661139 kubelet[2391]: E0516 00:21:48.661032 2391 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 00:21:48.661356 kubelet[2391]: I0516 00:21:48.661249 2391 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:21:48.661356 kubelet[2391]: I0516 00:21:48.661264 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:21:48.661491 kubelet[2391]: I0516 00:21:48.661479 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:21:48.662164 kubelet[2391]: E0516 00:21:48.662155 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:21:48.662279 kubelet[2391]: E0516 00:21:48.662254 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:21:48.761964 kubelet[2391]: I0516 00:21:48.761883 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:48.762829 kubelet[2391]: E0516 00:21:48.762746 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 16 00:21:48.778866 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. 
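[annotation] The three slices created above are the top of the kubelet's systemd cgroup hierarchy (cgroupDriver=systemd and CgroupRoot=/ per the container manager dump earlier): kubepods.slice with burstable and besteffort children, and one pod<UID>.slice per pod beneath them; guaranteed pods sit directly under kubepods.slice. The slice for the controller-manager static pod encodes its config-hash UID in the name. A small sketch of that naming scheme, assuming the usual kubelet convention of replacing "-" in the UID with "_":

```go
package main

import (
	"fmt"
	"strings"
)

// Systemd slice names encode their own ancestry: every "-" segment is a
// parent slice, so kubepods-burstable-pod<uid>.slice lives under
// kubepods-burstable.slice, which lives under kubepods.slice.
func podSlice(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_") // a literal dash would split the hierarchy
	if qosClass == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// UID from the log: the hash of the kube-controller-manager static pod spec.
	fmt.Println(podSlice("burstable", "97963c41ada533e2e0872a518ecd4611"))
	// -> kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice
}
```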
May 16 00:21:48.786999 kubelet[2391]: E0516 00:21:48.786977 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:48.816403 kubelet[2391]: I0516 00:21:48.816269 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 00:21:48.816403 kubelet[2391]: I0516 00:21:48.816291 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:48.816403 kubelet[2391]: I0516 00:21:48.816306 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:48.816403 kubelet[2391]: I0516 00:21:48.816320 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:48.816403 kubelet[2391]: I0516 00:21:48.816332 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:48.816575 kubelet[2391]: I0516 00:21:48.816345 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:48.816575 kubelet[2391]: E0516 00:21:48.816463 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" May 16 00:21:48.822465 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. 
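[annotation] Every volume the reconciler "verifies" here is a hostPath mount declared in the static pod manifests under /etc/kubernetes/manifests; nothing is attached over a network, so verification is local bookkeeping. A sketch of one such declaration as a client-go object, mirroring the kubeconfig volume of kube-scheduler-localhost above (the concrete scheduler.conf path is an assumption based on kubeadm convention):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The "kubeconfig" volume as the kubelet sees it after parsing the
	// static manifest: a plain hostPath source, nothing to attach.
	hostPathFile := corev1.HostPathFile
	vol := corev1.Volume{
		Name: "kubeconfig",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/etc/kubernetes/scheduler.conf", // conventional kubeadm path, assumed
				Type: &hostPathFile,
			},
		},
	}
	// The reconciler's UniqueName is kubernetes.io/host-path/<podUID>-<volName>,
	// matching the entries in the log.
	fmt.Printf("kubernetes.io/host-path/%s-%s\n", "8fba52155e63f70cc922ab7cc8c200fd", vol.Name)
}
```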
May 16 00:21:48.823801 kubelet[2391]: E0516 00:21:48.823683 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:48.857368 systemd[1]: Created slice kubepods-burstable-podba8760e1334d10136adb18f11c6e2445.slice - libcontainer container kubepods-burstable-podba8760e1334d10136adb18f11c6e2445.slice. May 16 00:21:48.858367 kubelet[2391]: E0516 00:21:48.858339 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:48.916857 kubelet[2391]: I0516 00:21:48.916832 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba8760e1334d10136adb18f11c6e2445-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba8760e1334d10136adb18f11c6e2445\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:48.916857 kubelet[2391]: I0516 00:21:48.916860 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba8760e1334d10136adb18f11c6e2445-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba8760e1334d10136adb18f11c6e2445\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:48.916970 kubelet[2391]: I0516 00:21:48.916903 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba8760e1334d10136adb18f11c6e2445-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba8760e1334d10136adb18f11c6e2445\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:48.963814 kubelet[2391]: I0516 00:21:48.963782 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:48.964080 kubelet[2391]: E0516 00:21:48.964062 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 16 00:21:49.088679 containerd[1554]: time="2025-05-16T00:21:49.088555381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 16 00:21:49.125462 containerd[1554]: time="2025-05-16T00:21:49.125432272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 16 00:21:49.152797 containerd[1554]: time="2025-05-16T00:21:49.152771817Z" level=info msg="connecting to shim 0255af789bfb61bc7e41349b2f550f6b944093bfe369f190f0dfc90e2c50f791" address="unix:///run/containerd/s/c0115c9f16370b047f304cba63b548e52ff8aa468c4daa443e2f180040d19631" namespace=k8s.io protocol=ttrpc version=3 May 16 00:21:49.160008 containerd[1554]: time="2025-05-16T00:21:49.159072459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ba8760e1334d10136adb18f11c6e2445,Namespace:kube-system,Attempt:0,}" May 16 00:21:49.161410 containerd[1554]: time="2025-05-16T00:21:49.161330168Z" level=info msg="connecting to shim 67644e37372872c852b4e8b2c52af863465f9f26f5814eacddb3a0d9f5e7f999" address="unix:///run/containerd/s/d0bbdfc3c5270d65198b311ad84a5f0f21f481142d48e13e06ccdf4651e8565f" namespace=k8s.io protocol=ttrpc 
version=3 May 16 00:21:49.176974 containerd[1554]: time="2025-05-16T00:21:49.176943190Z" level=info msg="connecting to shim 522568582599ced241b4678e2853468ce20080afdb6ca16978649ee2b5fd2e4c" address="unix:///run/containerd/s/63664c11fd5d22b77e4668aad78f0c978471745371770390e58e88658c4aa8e4" namespace=k8s.io protocol=ttrpc version=3 May 16 00:21:49.217530 kubelet[2391]: E0516 00:21:49.217482 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" May 16 00:21:49.220501 systemd[1]: Started cri-containerd-0255af789bfb61bc7e41349b2f550f6b944093bfe369f190f0dfc90e2c50f791.scope - libcontainer container 0255af789bfb61bc7e41349b2f550f6b944093bfe369f190f0dfc90e2c50f791. May 16 00:21:49.222404 systemd[1]: Started cri-containerd-522568582599ced241b4678e2853468ce20080afdb6ca16978649ee2b5fd2e4c.scope - libcontainer container 522568582599ced241b4678e2853468ce20080afdb6ca16978649ee2b5fd2e4c. May 16 00:21:49.223519 systemd[1]: Started cri-containerd-67644e37372872c852b4e8b2c52af863465f9f26f5814eacddb3a0d9f5e7f999.scope - libcontainer container 67644e37372872c852b4e8b2c52af863465f9f26f5814eacddb3a0d9f5e7f999. May 16 00:21:49.262360 containerd[1554]: time="2025-05-16T00:21:49.262328795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ba8760e1334d10136adb18f11c6e2445,Namespace:kube-system,Attempt:0,} returns sandbox id \"522568582599ced241b4678e2853468ce20080afdb6ca16978649ee2b5fd2e4c\"" May 16 00:21:49.268628 containerd[1554]: time="2025-05-16T00:21:49.268230552Z" level=info msg="CreateContainer within sandbox \"522568582599ced241b4678e2853468ce20080afdb6ca16978649ee2b5fd2e4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:21:49.278247 containerd[1554]: time="2025-05-16T00:21:49.278222139Z" level=info msg="Container fcc17e7999a0bf3abd1819e34f89bef482f5bf05dfc3484e807151ac0c5eb246: CDI devices from CRI Config.CDIDevices: []" May 16 00:21:49.283628 containerd[1554]: time="2025-05-16T00:21:49.283603875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"67644e37372872c852b4e8b2c52af863465f9f26f5814eacddb3a0d9f5e7f999\"" May 16 00:21:49.284774 containerd[1554]: time="2025-05-16T00:21:49.284757031Z" level=info msg="CreateContainer within sandbox \"522568582599ced241b4678e2853468ce20080afdb6ca16978649ee2b5fd2e4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fcc17e7999a0bf3abd1819e34f89bef482f5bf05dfc3484e807151ac0c5eb246\"" May 16 00:21:49.285455 containerd[1554]: time="2025-05-16T00:21:49.285445210Z" level=info msg="StartContainer for \"fcc17e7999a0bf3abd1819e34f89bef482f5bf05dfc3484e807151ac0c5eb246\"" May 16 00:21:49.290092 containerd[1554]: time="2025-05-16T00:21:49.290072130Z" level=info msg="CreateContainer within sandbox \"67644e37372872c852b4e8b2c52af863465f9f26f5814eacddb3a0d9f5e7f999\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:21:49.290522 containerd[1554]: time="2025-05-16T00:21:49.290202401Z" level=info msg="connecting to shim fcc17e7999a0bf3abd1819e34f89bef482f5bf05dfc3484e807151ac0c5eb246" address="unix:///run/containerd/s/63664c11fd5d22b77e4668aad78f0c978471745371770390e58e88658c4aa8e4" protocol=ttrpc version=3 
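[annotation] This stretch is the CRI call sequence for the static pods: RunPodSandbox starts a pause container plus a per-pod shim (the "connecting to shim ... protocol=ttrpc version=3" entries), then CreateContainer/StartContainer run inside that sandbox. The kubelet drives all of it over gRPC on containerd's socket. A minimal client sketch against the same endpoint, using only the cheap Version RPC since the heavier calls need full sandbox and container configs:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same socket the kubelet's --container-runtime-endpoint points at.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// RunPodSandbox, CreateContainer and StartContainer are siblings of this
	// call on the same RuntimeService interface.
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion)
}
```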
May 16 00:21:49.291114 containerd[1554]: time="2025-05-16T00:21:49.290888115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0255af789bfb61bc7e41349b2f550f6b944093bfe369f190f0dfc90e2c50f791\"" May 16 00:21:49.299790 containerd[1554]: time="2025-05-16T00:21:49.299767977Z" level=info msg="CreateContainer within sandbox \"0255af789bfb61bc7e41349b2f550f6b944093bfe369f190f0dfc90e2c50f791\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:21:49.305451 systemd[1]: Started cri-containerd-fcc17e7999a0bf3abd1819e34f89bef482f5bf05dfc3484e807151ac0c5eb246.scope - libcontainer container fcc17e7999a0bf3abd1819e34f89bef482f5bf05dfc3484e807151ac0c5eb246. May 16 00:21:49.322211 containerd[1554]: time="2025-05-16T00:21:49.321941645Z" level=info msg="Container c19fba2497c70695d96bd191a8851891d7a886cd528cd81fbce9e13d13102c3e: CDI devices from CRI Config.CDIDevices: []" May 16 00:21:49.331891 containerd[1554]: time="2025-05-16T00:21:49.331804974Z" level=info msg="CreateContainer within sandbox \"67644e37372872c852b4e8b2c52af863465f9f26f5814eacddb3a0d9f5e7f999\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c19fba2497c70695d96bd191a8851891d7a886cd528cd81fbce9e13d13102c3e\"" May 16 00:21:49.332424 containerd[1554]: time="2025-05-16T00:21:49.332397034Z" level=info msg="StartContainer for \"c19fba2497c70695d96bd191a8851891d7a886cd528cd81fbce9e13d13102c3e\"" May 16 00:21:49.333082 containerd[1554]: time="2025-05-16T00:21:49.333069968Z" level=info msg="connecting to shim c19fba2497c70695d96bd191a8851891d7a886cd528cd81fbce9e13d13102c3e" address="unix:///run/containerd/s/d0bbdfc3c5270d65198b311ad84a5f0f21f481142d48e13e06ccdf4651e8565f" protocol=ttrpc version=3 May 16 00:21:49.333758 containerd[1554]: time="2025-05-16T00:21:49.333739707Z" level=info msg="Container cdf9ad942ddfba25f70c74ac4506a4bffd57574325702250991b80ce0761c0ef: CDI devices from CRI Config.CDIDevices: []" May 16 00:21:49.338403 containerd[1554]: time="2025-05-16T00:21:49.338312929Z" level=info msg="CreateContainer within sandbox \"0255af789bfb61bc7e41349b2f550f6b944093bfe369f190f0dfc90e2c50f791\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cdf9ad942ddfba25f70c74ac4506a4bffd57574325702250991b80ce0761c0ef\"" May 16 00:21:49.338842 containerd[1554]: time="2025-05-16T00:21:49.338681216Z" level=info msg="StartContainer for \"cdf9ad942ddfba25f70c74ac4506a4bffd57574325702250991b80ce0761c0ef\"" May 16 00:21:49.339775 containerd[1554]: time="2025-05-16T00:21:49.339616178Z" level=info msg="connecting to shim cdf9ad942ddfba25f70c74ac4506a4bffd57574325702250991b80ce0761c0ef" address="unix:///run/containerd/s/c0115c9f16370b047f304cba63b548e52ff8aa468c4daa443e2f180040d19631" protocol=ttrpc version=3 May 16 00:21:49.346842 containerd[1554]: time="2025-05-16T00:21:49.346806556Z" level=info msg="StartContainer for \"fcc17e7999a0bf3abd1819e34f89bef482f5bf05dfc3484e807151ac0c5eb246\" returns successfully" May 16 00:21:49.349446 systemd[1]: Started cri-containerd-c19fba2497c70695d96bd191a8851891d7a886cd528cd81fbce9e13d13102c3e.scope - libcontainer container c19fba2497c70695d96bd191a8851891d7a886cd528cd81fbce9e13d13102c3e. May 16 00:21:49.355212 systemd[1]: Started cri-containerd-cdf9ad942ddfba25f70c74ac4506a4bffd57574325702250991b80ce0761c0ef.scope - libcontainer container cdf9ad942ddfba25f70c74ac4506a4bffd57574325702250991b80ce0761c0ef. 
May 16 00:21:49.365990 kubelet[2391]: I0516 00:21:49.365967 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:49.366589 kubelet[2391]: E0516 00:21:49.366574 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 16 00:21:49.389143 containerd[1554]: time="2025-05-16T00:21:49.389105068Z" level=info msg="StartContainer for \"c19fba2497c70695d96bd191a8851891d7a886cd528cd81fbce9e13d13102c3e\" returns successfully" May 16 00:21:49.401303 containerd[1554]: time="2025-05-16T00:21:49.401249484Z" level=info msg="StartContainer for \"cdf9ad942ddfba25f70c74ac4506a4bffd57574325702250991b80ce0761c0ef\" returns successfully" May 16 00:21:49.484728 kubelet[2391]: E0516 00:21:49.484701 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 16 00:21:49.537683 kubelet[2391]: E0516 00:21:49.537657 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 16 00:21:49.633948 kubelet[2391]: E0516 00:21:49.633926 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:49.636065 kubelet[2391]: E0516 00:21:49.635962 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:49.639040 kubelet[2391]: E0516 00:21:49.639015 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:49.786979 kubelet[2391]: E0516 00:21:49.786951 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 16 00:21:50.018806 kubelet[2391]: E0516 00:21:50.018576 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" May 16 00:21:50.168233 kubelet[2391]: I0516 00:21:50.168006 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:50.639629 kubelet[2391]: E0516 00:21:50.639374 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:50.639629 kubelet[2391]: E0516 00:21:50.639607 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" 
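[annotation] Notice the lease controller's retry interval doubling across these failures (200ms, 400ms, 800ms, 1.6s) while the kube-apiserver container the kubelet just launched is still coming up on 139.178.70.108:6443; once the socket accepts connections, the reflectors, the lease, and node registration all succeed together. A stdlib sketch of that wait, assuming the same endpoint and a capped doubling backoff (the cap value here is arbitrary, chosen only for the sketch):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll the apiserver's /healthz with the same doubling cadence the log shows.
// InsecureSkipVerify only because this sketch carries no CA bundle; the kubelet
// itself verifies against /etc/kubernetes/pki/ca.crt.
func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	interval := 200 * time.Millisecond
	for {
		resp, err := client.Get("https://139.178.70.108:6443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver reachable:", resp.Status)
			return
		}
		fmt.Printf("connection refused, will retry in %v: %v\n", interval, err)
		time.Sleep(interval)
		if interval < 8*time.Second { // arbitrary cap for the sketch
			interval *= 2
		}
	}
}
```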
May 16 00:21:50.640383 kubelet[2391]: E0516 00:21:50.640335 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:51.111241 kubelet[2391]: I0516 00:21:51.111161 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:21:51.111241 kubelet[2391]: E0516 00:21:51.111187 2391 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 00:21:51.132238 kubelet[2391]: E0516 00:21:51.132218 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:51.232939 kubelet[2391]: E0516 00:21:51.232912 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:51.333778 kubelet[2391]: E0516 00:21:51.333736 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:51.433887 kubelet[2391]: E0516 00:21:51.433865 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:51.534831 kubelet[2391]: E0516 00:21:51.534804 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:51.597377 kubelet[2391]: I0516 00:21:51.597287 2391 apiserver.go:52] "Watching apiserver" May 16 00:21:51.611994 kubelet[2391]: I0516 00:21:51.611845 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:21:51.615039 kubelet[2391]: I0516 00:21:51.615017 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:21:51.630146 kubelet[2391]: E0516 00:21:51.630115 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 00:21:51.630146 kubelet[2391]: I0516 00:21:51.630139 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:21:51.632150 kubelet[2391]: E0516 00:21:51.631949 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 00:21:51.632150 kubelet[2391]: I0516 00:21:51.631965 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:21:51.639385 kubelet[2391]: E0516 00:21:51.639369 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 00:21:51.639785 kubelet[2391]: I0516 00:21:51.639768 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:21:51.640898 kubelet[2391]: E0516 00:21:51.640878 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 00:21:52.831385 systemd[1]: Reload requested from client PID 2667 ('systemctl') (unit session-9.scope)... 
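[annotation] The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are a bootstrap ordering artifact: static pod specs reference the built-in critical PriorityClasses, but the freshly started apiserver has not created them yet, and the retries succeed once it does. For reference, a sketch of the object those mirror pods are waiting for; the value 2000001000 is the well-known built-in, but treat the literal as illustrative:

```go
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pc := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
		// Built-in value; only system-* classes may exceed one billion.
		Value:       2000001000,
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	fmt.Printf("%s = %d\n", pc.Name, pc.Value)
}
```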
May 16 00:21:52.831401 systemd[1]: Reloading... May 16 00:21:52.896411 zram_generator::config[2715]: No configuration found. May 16 00:21:52.956560 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 16 00:21:52.974402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:21:53.047490 systemd[1]: Reloading finished in 215 ms. May 16 00:21:53.066763 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:53.073148 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:21:53.073301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:53.073334 systemd[1]: kubelet.service: Consumed 443ms CPU time, 127.6M memory peak. May 16 00:21:53.074727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:53.915535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:53.925637 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:21:53.998021 kubelet[2779]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:53.998021 kubelet[2779]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:21:53.998021 kubelet[2779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:53.998784 kubelet[2779]: I0516 00:21:53.998068 2779 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:21:54.006156 kubelet[2779]: I0516 00:21:54.006130 2779 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 00:21:54.006156 kubelet[2779]: I0516 00:21:54.006149 2779 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:21:54.006302 kubelet[2779]: I0516 00:21:54.006290 2779 server.go:956] "Client rotation is on, will bootstrap in background" May 16 00:21:54.007221 kubelet[2779]: I0516 00:21:54.007209 2779 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 16 00:21:54.015032 kubelet[2779]: I0516 00:21:54.014722 2779 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:21:54.021254 kubelet[2779]: I0516 00:21:54.020655 2779 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 00:21:54.024193 kubelet[2779]: I0516 00:21:54.023744 2779 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:21:54.024193 kubelet[2779]: I0516 00:21:54.023878 2779 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:21:54.024193 kubelet[2779]: I0516 00:21:54.023892 2779 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:21:54.024193 kubelet[2779]: I0516 00:21:54.024047 2779 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:21:54.024333 kubelet[2779]: I0516 00:21:54.024055 2779 container_manager_linux.go:303] "Creating device plugin manager" May 16 00:21:54.026651 kubelet[2779]: I0516 00:21:54.026602 2779 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:54.028414 kubelet[2779]: I0516 00:21:54.028094 2779 kubelet.go:480] "Attempting to sync node with API server" May 16 00:21:54.028414 kubelet[2779]: I0516 00:21:54.028108 2779 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:21:54.028414 kubelet[2779]: I0516 00:21:54.028124 2779 kubelet.go:386] "Adding apiserver pod source" May 16 00:21:54.028414 kubelet[2779]: I0516 00:21:54.028134 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:21:54.029763 kubelet[2779]: I0516 00:21:54.029749 2779 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 00:21:54.030278 kubelet[2779]: I0516 00:21:54.030043 2779 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 00:21:54.038157 kubelet[2779]: I0516 00:21:54.038105 2779 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:21:54.038157 kubelet[2779]: I0516 00:21:54.038139 2779 server.go:1289] "Started kubelet" May 16 00:21:54.038451 kubelet[2779]: I0516 00:21:54.038284 2779 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:21:54.039623 kubelet[2779]: I0516 
00:21:54.039049 2779 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:21:54.039623 kubelet[2779]: I0516 00:21:54.039209 2779 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:21:54.041188 kubelet[2779]: I0516 00:21:54.039922 2779 server.go:317] "Adding debug handlers to kubelet server" May 16 00:21:54.043309 kubelet[2779]: I0516 00:21:54.043299 2779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:21:54.049293 kubelet[2779]: I0516 00:21:54.049123 2779 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:21:54.053146 kubelet[2779]: I0516 00:21:54.053131 2779 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:21:54.053220 kubelet[2779]: I0516 00:21:54.053209 2779 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:21:54.053300 kubelet[2779]: I0516 00:21:54.053282 2779 reconciler.go:26] "Reconciler: start to sync state" May 16 00:21:54.055365 kubelet[2779]: I0516 00:21:54.055258 2779 factory.go:223] Registration of the systemd container factory successfully May 16 00:21:54.055365 kubelet[2779]: I0516 00:21:54.055318 2779 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:21:54.056337 kubelet[2779]: E0516 00:21:54.056143 2779 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:21:54.057431 kubelet[2779]: I0516 00:21:54.056983 2779 factory.go:223] Registration of the containerd container factory successfully May 16 00:21:54.061475 kubelet[2779]: I0516 00:21:54.061394 2779 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 00:21:54.062425 kubelet[2779]: I0516 00:21:54.062182 2779 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 00:21:54.062425 kubelet[2779]: I0516 00:21:54.062198 2779 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 00:21:54.062425 kubelet[2779]: I0516 00:21:54.062212 2779 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
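[annotation] Both kubelet generations log "Systemd watchdog is not enabled" because the unit sets no WatchdogSec=, so WATCHDOG_USEC is absent from the environment and the kubelet skips its sd_notify keepalives. A sketch of that detection using the go-systemd bindings, as I understand their behavior (the kubelet's own logic lives in watchdog_linux.go):

```go
package main

import (
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/daemon"
)

func main() {
	// SdWatchdogEnabled returns 0 when WATCHDOG_USEC is unset or invalid,
	// which is exactly the condition the kubelet reports here.
	interval, err := daemon.SdWatchdogEnabled(false)
	if err != nil {
		log.Fatal(err)
	}
	if interval == 0 {
		fmt.Println("systemd watchdog is not enabled; skipping health checking")
		return
	}
	// With WatchdogSec= set, a service pings at some fraction of the interval.
	ok, _ := daemon.SdNotify(false, daemon.SdNotifyWatchdog)
	fmt.Println("watchdog ping sent:", ok, "interval:", interval)
}
```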
May 16 00:21:54.062425 kubelet[2779]: I0516 00:21:54.062217 2779 kubelet.go:2436] "Starting kubelet main sync loop" May 16 00:21:54.062425 kubelet[2779]: E0516 00:21:54.062237 2779 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:21:54.103609 kubelet[2779]: I0516 00:21:54.103590 2779 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:21:54.103609 kubelet[2779]: I0516 00:21:54.103601 2779 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:21:54.103609 kubelet[2779]: I0516 00:21:54.103612 2779 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:54.103720 kubelet[2779]: I0516 00:21:54.103699 2779 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:21:54.103720 kubelet[2779]: I0516 00:21:54.103706 2779 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:21:54.103720 kubelet[2779]: I0516 00:21:54.103717 2779 policy_none.go:49] "None policy: Start" May 16 00:21:54.103772 kubelet[2779]: I0516 00:21:54.103722 2779 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:21:54.103772 kubelet[2779]: I0516 00:21:54.103728 2779 state_mem.go:35] "Initializing new in-memory state store" May 16 00:21:54.103808 kubelet[2779]: I0516 00:21:54.103780 2779 state_mem.go:75] "Updated machine memory state" May 16 00:21:54.106751 kubelet[2779]: E0516 00:21:54.106741 2779 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 00:21:54.107423 kubelet[2779]: I0516 00:21:54.107196 2779 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:21:54.107423 kubelet[2779]: I0516 00:21:54.107205 2779 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:21:54.107476 kubelet[2779]: I0516 00:21:54.107457 2779 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:21:54.109182 kubelet[2779]: E0516 00:21:54.108577 2779 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:21:54.164919 kubelet[2779]: I0516 00:21:54.162773 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:21:54.164919 kubelet[2779]: I0516 00:21:54.163529 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:21:54.164919 kubelet[2779]: I0516 00:21:54.163921 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:21:54.210096 kubelet[2779]: I0516 00:21:54.210019 2779 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:54.215558 kubelet[2779]: I0516 00:21:54.215084 2779 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 00:21:54.215558 kubelet[2779]: I0516 00:21:54.215135 2779 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:21:54.254900 kubelet[2779]: I0516 00:21:54.254875 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba8760e1334d10136adb18f11c6e2445-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba8760e1334d10136adb18f11c6e2445\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:54.254900 kubelet[2779]: I0516 00:21:54.254898 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba8760e1334d10136adb18f11c6e2445-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba8760e1334d10136adb18f11c6e2445\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:54.255028 kubelet[2779]: I0516 00:21:54.254910 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba8760e1334d10136adb18f11c6e2445-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba8760e1334d10136adb18f11c6e2445\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:54.255028 kubelet[2779]: I0516 00:21:54.254920 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:54.255028 kubelet[2779]: I0516 00:21:54.254929 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:54.255028 kubelet[2779]: I0516 00:21:54.254939 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:54.255028 kubelet[2779]: I0516 00:21:54.254947 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:54.255127 kubelet[2779]: I0516 00:21:54.254956 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:54.255127 kubelet[2779]: I0516 00:21:54.254965 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 00:21:55.033534 kubelet[2779]: I0516 00:21:55.033497 2779 apiserver.go:52] "Watching apiserver" May 16 00:21:55.053542 kubelet[2779]: I0516 00:21:55.053500 2779 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:21:55.092807 kubelet[2779]: I0516 00:21:55.092762 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:21:55.098551 kubelet[2779]: E0516 00:21:55.098502 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:21:55.109610 kubelet[2779]: I0516 00:21:55.109492 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.109473982 podStartE2EDuration="1.109473982s" podCreationTimestamp="2025-05-16 00:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:55.098625161 +0000 UTC m=+1.157410784" watchObservedRunningTime="2025-05-16 00:21:55.109473982 +0000 UTC m=+1.168259620" May 16 00:21:55.121611 kubelet[2779]: I0516 00:21:55.121492 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.121481152 podStartE2EDuration="1.121481152s" podCreationTimestamp="2025-05-16 00:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:55.121472276 +0000 UTC m=+1.180257905" watchObservedRunningTime="2025-05-16 00:21:55.121481152 +0000 UTC m=+1.180266789" May 16 00:21:55.121611 kubelet[2779]: I0516 00:21:55.121544 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.121541011 podStartE2EDuration="1.121541011s" podCreationTimestamp="2025-05-16 00:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:55.109857059 +0000 UTC m=+1.168642681" watchObservedRunningTime="2025-05-16 00:21:55.121541011 +0000 UTC m=+1.180326648" May 16 00:21:58.294645 kubelet[2779]: I0516 00:21:58.294536 2779 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:21:58.294935 kubelet[2779]: I0516 
00:21:58.294868 2779 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:21:58.294970 containerd[1554]: time="2025-05-16T00:21:58.294758041Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:21:59.304883 systemd[1]: Created slice kubepods-besteffort-podeb1f04b0_0654_47da_a1f5_d45c934ccb00.slice - libcontainer container kubepods-besteffort-podeb1f04b0_0654_47da_a1f5_d45c934ccb00.slice. May 16 00:21:59.383375 kubelet[2779]: I0516 00:21:59.383333 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brx2n\" (UniqueName: \"kubernetes.io/projected/eb1f04b0-0654-47da-a1f5-d45c934ccb00-kube-api-access-brx2n\") pod \"kube-proxy-qf9bh\" (UID: \"eb1f04b0-0654-47da-a1f5-d45c934ccb00\") " pod="kube-system/kube-proxy-qf9bh" May 16 00:21:59.384177 kubelet[2779]: I0516 00:21:59.383393 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb1f04b0-0654-47da-a1f5-d45c934ccb00-kube-proxy\") pod \"kube-proxy-qf9bh\" (UID: \"eb1f04b0-0654-47da-a1f5-d45c934ccb00\") " pod="kube-system/kube-proxy-qf9bh" May 16 00:21:59.384177 kubelet[2779]: I0516 00:21:59.383411 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb1f04b0-0654-47da-a1f5-d45c934ccb00-xtables-lock\") pod \"kube-proxy-qf9bh\" (UID: \"eb1f04b0-0654-47da-a1f5-d45c934ccb00\") " pod="kube-system/kube-proxy-qf9bh" May 16 00:21:59.384177 kubelet[2779]: I0516 00:21:59.383423 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb1f04b0-0654-47da-a1f5-d45c934ccb00-lib-modules\") pod \"kube-proxy-qf9bh\" (UID: \"eb1f04b0-0654-47da-a1f5-d45c934ccb00\") " pod="kube-system/kube-proxy-qf9bh" May 16 00:21:59.413707 systemd[1]: Created slice kubepods-besteffort-pode1b13b10_e423_4cc2_8d61_ccb6ac257378.slice - libcontainer container kubepods-besteffort-pode1b13b10_e423_4cc2_8d61_ccb6ac257378.slice. 
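[Annotation] The kubelet entries just above record the node receiving pod CIDR 192.168.0.0/24 and pushing it to the runtime through CRI, after which containerd waits for a CNI config to appear. As a quick sanity check of what a /24 pod range provides, using only Python's standard ipaddress module (the CIDR value is taken from the log; the usable-pod estimate is approximate, since the CNI plugin typically reserves a gateway address):

    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")  # newPodCIDR from the log
    print(pod_cidr.num_addresses)                      # 256 total addresses
    print(len(list(pod_cidr.hosts())))                 # 254 host addresses
    # Roughly 253 pod IPs remain once the CNI plugin reserves a gateway.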
May 16 00:21:59.483609 kubelet[2779]: I0516 00:21:59.483570 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfs2\" (UniqueName: \"kubernetes.io/projected/e1b13b10-e423-4cc2-8d61-ccb6ac257378-kube-api-access-kdfs2\") pod \"tigera-operator-844669ff44-bf7jl\" (UID: \"e1b13b10-e423-4cc2-8d61-ccb6ac257378\") " pod="tigera-operator/tigera-operator-844669ff44-bf7jl" May 16 00:21:59.484056 kubelet[2779]: I0516 00:21:59.483650 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1b13b10-e423-4cc2-8d61-ccb6ac257378-var-lib-calico\") pod \"tigera-operator-844669ff44-bf7jl\" (UID: \"e1b13b10-e423-4cc2-8d61-ccb6ac257378\") " pod="tigera-operator/tigera-operator-844669ff44-bf7jl" May 16 00:21:59.611743 containerd[1554]: time="2025-05-16T00:21:59.611671306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf9bh,Uid:eb1f04b0-0654-47da-a1f5-d45c934ccb00,Namespace:kube-system,Attempt:0,}" May 16 00:21:59.634293 containerd[1554]: time="2025-05-16T00:21:59.633959008Z" level=info msg="connecting to shim c783788e78acd0528cf22df7ad5151a58cf592915f4ff7dadc56b67171112502" address="unix:///run/containerd/s/e572a9d99c4ff0781b07ef6402f481d9d5af4d802b6b52677e0338977439a882" namespace=k8s.io protocol=ttrpc version=3 May 16 00:21:59.653440 systemd[1]: Started cri-containerd-c783788e78acd0528cf22df7ad5151a58cf592915f4ff7dadc56b67171112502.scope - libcontainer container c783788e78acd0528cf22df7ad5151a58cf592915f4ff7dadc56b67171112502. May 16 00:21:59.668708 containerd[1554]: time="2025-05-16T00:21:59.668653955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf9bh,Uid:eb1f04b0-0654-47da-a1f5-d45c934ccb00,Namespace:kube-system,Attempt:0,} returns sandbox id \"c783788e78acd0528cf22df7ad5151a58cf592915f4ff7dadc56b67171112502\"" May 16 00:21:59.671280 containerd[1554]: time="2025-05-16T00:21:59.671176380Z" level=info msg="CreateContainer within sandbox \"c783788e78acd0528cf22df7ad5151a58cf592915f4ff7dadc56b67171112502\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:21:59.677581 containerd[1554]: time="2025-05-16T00:21:59.677531073Z" level=info msg="Container 253a33a5fae2d87627818229cb75387da170d5519a327f314718502cb01d3e86: CDI devices from CRI Config.CDIDevices: []" May 16 00:21:59.683408 containerd[1554]: time="2025-05-16T00:21:59.683380589Z" level=info msg="CreateContainer within sandbox \"c783788e78acd0528cf22df7ad5151a58cf592915f4ff7dadc56b67171112502\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"253a33a5fae2d87627818229cb75387da170d5519a327f314718502cb01d3e86\"" May 16 00:21:59.688812 containerd[1554]: time="2025-05-16T00:21:59.688780349Z" level=info msg="StartContainer for \"253a33a5fae2d87627818229cb75387da170d5519a327f314718502cb01d3e86\"" May 16 00:21:59.689730 containerd[1554]: time="2025-05-16T00:21:59.689709891Z" level=info msg="connecting to shim 253a33a5fae2d87627818229cb75387da170d5519a327f314718502cb01d3e86" address="unix:///run/containerd/s/e572a9d99c4ff0781b07ef6402f481d9d5af4d802b6b52677e0338977439a882" protocol=ttrpc version=3 May 16 00:21:59.706539 systemd[1]: Started cri-containerd-253a33a5fae2d87627818229cb75387da170d5519a327f314718502cb01d3e86.scope - libcontainer container 253a33a5fae2d87627818229cb75387da170d5519a327f314718502cb01d3e86. 
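[Annotation] The containerd entries above trace the full CRI call sequence for one pod: RunPodSandbox returns a sandbox id, CreateContainer places kube-proxy inside that sandbox, and StartContainer hands it to the shim over the same ttrpc address both "connecting to shim" lines share (one shim serves the whole sandbox). Schematically, against a hypothetical `cri` client object (the real interface is gRPC, defined in k8s.io/cri-api; all names here are illustrative):

    def launch_pod(cri, sandbox_config, container_config):
        """Order of CRI calls visible in the containerd log above (schematic)."""
        # 1. RunPodSandbox: create the pod's shared network/IPC environment.
        sandbox_id = cri.run_pod_sandbox(sandbox_config)
        # 2. CreateContainer: materialize the container inside that sandbox.
        container_id = cri.create_container(sandbox_id, container_config, sandbox_config)
        # 3. StartContainer: the sandbox's shim actually runs it.
        cri.start_container(container_id)
        return sandbox_id, container_id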
May 16 00:21:59.715759 containerd[1554]: time="2025-05-16T00:21:59.715702727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-bf7jl,Uid:e1b13b10-e423-4cc2-8d61-ccb6ac257378,Namespace:tigera-operator,Attempt:0,}" May 16 00:21:59.740766 containerd[1554]: time="2025-05-16T00:21:59.740742929Z" level=info msg="StartContainer for \"253a33a5fae2d87627818229cb75387da170d5519a327f314718502cb01d3e86\" returns successfully" May 16 00:21:59.762378 containerd[1554]: time="2025-05-16T00:21:59.762288625Z" level=info msg="connecting to shim a636663c4f4f47e10dee50ab53090ca8e283bb1d1482e8750e38c8056769fa4f" address="unix:///run/containerd/s/9eadc766810fc1f12652e8bc60fc19e2f1afadeafb36a5acc36acd342ba07d52" namespace=k8s.io protocol=ttrpc version=3 May 16 00:21:59.778565 systemd[1]: Started cri-containerd-a636663c4f4f47e10dee50ab53090ca8e283bb1d1482e8750e38c8056769fa4f.scope - libcontainer container a636663c4f4f47e10dee50ab53090ca8e283bb1d1482e8750e38c8056769fa4f. May 16 00:21:59.810708 containerd[1554]: time="2025-05-16T00:21:59.810685755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-bf7jl,Uid:e1b13b10-e423-4cc2-8d61-ccb6ac257378,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a636663c4f4f47e10dee50ab53090ca8e283bb1d1482e8750e38c8056769fa4f\"" May 16 00:21:59.812163 containerd[1554]: time="2025-05-16T00:21:59.811954923Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 00:22:00.162375 kubelet[2779]: I0516 00:22:00.162315 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qf9bh" podStartSLOduration=1.162301486 podStartE2EDuration="1.162301486s" podCreationTimestamp="2025-05-16 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:22:00.116168972 +0000 UTC m=+6.174954605" watchObservedRunningTime="2025-05-16 00:22:00.162301486 +0000 UTC m=+6.221087125" May 16 00:22:00.492994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705002723.mount: Deactivated successfully. May 16 00:22:01.338970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295967224.mount: Deactivated successfully. 
May 16 00:22:01.950272 containerd[1554]: time="2025-05-16T00:22:01.950245488Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:01.950901 containerd[1554]: time="2025-05-16T00:22:01.950854039Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 16 00:22:01.951192 containerd[1554]: time="2025-05-16T00:22:01.951177695Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:01.952074 containerd[1554]: time="2025-05-16T00:22:01.952051392Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:01.952495 containerd[1554]: time="2025-05-16T00:22:01.952427994Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.140455736s" May 16 00:22:01.952495 containerd[1554]: time="2025-05-16T00:22:01.952445017Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 16 00:22:01.954082 containerd[1554]: time="2025-05-16T00:22:01.954064260Z" level=info msg="CreateContainer within sandbox \"a636663c4f4f47e10dee50ab53090ca8e283bb1d1482e8750e38c8056769fa4f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 00:22:01.958528 containerd[1554]: time="2025-05-16T00:22:01.958177119Z" level=info msg="Container f543143a01339b7c68e48f74ba960c3e8d170003835808df8071fb61560b8988: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:01.971312 containerd[1554]: time="2025-05-16T00:22:01.971268382Z" level=info msg="CreateContainer within sandbox \"a636663c4f4f47e10dee50ab53090ca8e283bb1d1482e8750e38c8056769fa4f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f543143a01339b7c68e48f74ba960c3e8d170003835808df8071fb61560b8988\"" May 16 00:22:01.971647 containerd[1554]: time="2025-05-16T00:22:01.971585508Z" level=info msg="StartContainer for \"f543143a01339b7c68e48f74ba960c3e8d170003835808df8071fb61560b8988\"" May 16 00:22:01.972528 containerd[1554]: time="2025-05-16T00:22:01.972467929Z" level=info msg="connecting to shim f543143a01339b7c68e48f74ba960c3e8d170003835808df8071fb61560b8988" address="unix:///run/containerd/s/9eadc766810fc1f12652e8bc60fc19e2f1afadeafb36a5acc36acd342ba07d52" protocol=ttrpc version=3 May 16 00:22:01.993433 systemd[1]: Started cri-containerd-f543143a01339b7c68e48f74ba960c3e8d170003835808df8071fb61560b8988.scope - libcontainer container f543143a01339b7c68e48f74ba960c3e8d170003835808df8071fb61560b8988. 
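[Annotation] The operator image pull recorded above reports 25055451 bytes read and a pull duration of 2.140455736s; the implied transfer rate is a one-liner to check (plain arithmetic on the two values in the log):

    bytes_read = 25_055_451   # "bytes read" from the stop-pulling entry
    duration_s = 2.140455736  # pull duration reported by containerd
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # -> 11.7 MB/s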
May 16 00:22:02.008542 containerd[1554]: time="2025-05-16T00:22:02.008396963Z" level=info msg="StartContainer for \"f543143a01339b7c68e48f74ba960c3e8d170003835808df8071fb61560b8988\" returns successfully" May 16 00:22:06.992160 sudo[1865]: pam_unix(sudo:session): session closed for user root May 16 00:22:06.992972 sshd[1864]: Connection closed by 147.75.109.163 port 53574 May 16 00:22:06.994595 sshd-session[1861]: pam_unix(sshd:session): session closed for user core May 16 00:22:06.997732 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit. May 16 00:22:06.997949 systemd[1]: sshd@6-139.178.70.108:22-147.75.109.163:53574.service: Deactivated successfully. May 16 00:22:06.999366 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:22:06.999520 systemd[1]: session-9.scope: Consumed 3.524s CPU time, 151.4M memory peak. May 16 00:22:07.003782 systemd-logind[1535]: Removed session 9. May 16 00:22:08.384620 kubelet[2779]: I0516 00:22:08.383359 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-bf7jl" podStartSLOduration=7.242070665 podStartE2EDuration="9.383339892s" podCreationTimestamp="2025-05-16 00:21:59 +0000 UTC" firstStartedPulling="2025-05-16 00:21:59.811585389 +0000 UTC m=+5.870371009" lastFinishedPulling="2025-05-16 00:22:01.952854617 +0000 UTC m=+8.011640236" observedRunningTime="2025-05-16 00:22:02.109911227 +0000 UTC m=+8.168696871" watchObservedRunningTime="2025-05-16 00:22:08.383339892 +0000 UTC m=+14.442125516" May 16 00:22:09.756712 systemd[1]: Created slice kubepods-besteffort-pode60d0807_2ce8_415d_b089_3487df5f2668.slice - libcontainer container kubepods-besteffort-pode60d0807_2ce8_415d_b089_3487df5f2668.slice. May 16 00:22:09.858014 kubelet[2779]: I0516 00:22:09.857859 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56mgx\" (UniqueName: \"kubernetes.io/projected/e60d0807-2ce8-415d-b089-3487df5f2668-kube-api-access-56mgx\") pod \"calico-typha-64bd9997d8-7skdd\" (UID: \"e60d0807-2ce8-415d-b089-3487df5f2668\") " pod="calico-system/calico-typha-64bd9997d8-7skdd" May 16 00:22:09.858014 kubelet[2779]: I0516 00:22:09.857891 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e60d0807-2ce8-415d-b089-3487df5f2668-tigera-ca-bundle\") pod \"calico-typha-64bd9997d8-7skdd\" (UID: \"e60d0807-2ce8-415d-b089-3487df5f2668\") " pod="calico-system/calico-typha-64bd9997d8-7skdd" May 16 00:22:09.858014 kubelet[2779]: I0516 00:22:09.857908 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e60d0807-2ce8-415d-b089-3487df5f2668-typha-certs\") pod \"calico-typha-64bd9997d8-7skdd\" (UID: \"e60d0807-2ce8-415d-b089-3487df5f2668\") " pod="calico-system/calico-typha-64bd9997d8-7skdd" May 16 00:22:09.955078 systemd[1]: Created slice kubepods-besteffort-pod75b9e71a_f767_4232_936e_91d6da8e72cd.slice - libcontainer container kubepods-besteffort-pod75b9e71a_f767_4232_936e_91d6da8e72cd.slice. 
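[Annotation] The tigera-operator pod_startup_latency_tracker entry earlier in this window (00:22:08.383) also shows how its two durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window, consistent with the tracker excluding time spent pulling images. The monotonic m=+ offsets in the entry reproduce the reported values exactly:

    first_started_pulling = 5.870371009  # m=+ offset, firstStartedPulling
    last_finished_pulling = 8.011640236  # m=+ offset, lastFinishedPulling
    pod_start_e2e         = 9.383339892  # watchObservedRunningTime - podCreationTimestamp

    pull_window = last_finished_pulling - first_started_pulling
    pod_start_slo = pod_start_e2e - pull_window
    print(round(pod_start_slo, 9))       # 7.242070665, matching the log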
May 16 00:22:09.959370 kubelet[2779]: I0516 00:22:09.958123 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-cni-bin-dir\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959370 kubelet[2779]: I0516 00:22:09.958143 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-cni-log-dir\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959370 kubelet[2779]: I0516 00:22:09.958154 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-flexvol-driver-host\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959370 kubelet[2779]: I0516 00:22:09.958162 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-policysync\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959370 kubelet[2779]: I0516 00:22:09.958179 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-var-run-calico\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959505 kubelet[2779]: I0516 00:22:09.958189 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-xtables-lock\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959505 kubelet[2779]: I0516 00:22:09.958203 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82s8m\" (UniqueName: \"kubernetes.io/projected/75b9e71a-f767-4232-936e-91d6da8e72cd-kube-api-access-82s8m\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959505 kubelet[2779]: I0516 00:22:09.958213 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-cni-net-dir\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959505 kubelet[2779]: I0516 00:22:09.958221 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/75b9e71a-f767-4232-936e-91d6da8e72cd-node-certs\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.959505 kubelet[2779]: I0516 00:22:09.958231 2779 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75b9e71a-f767-4232-936e-91d6da8e72cd-tigera-ca-bundle\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.966616 kubelet[2779]: I0516 00:22:09.958241 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-lib-modules\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:09.966616 kubelet[2779]: I0516 00:22:09.958249 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75b9e71a-f767-4232-936e-91d6da8e72cd-var-lib-calico\") pod \"calico-node-68rlj\" (UID: \"75b9e71a-f767-4232-936e-91d6da8e72cd\") " pod="calico-system/calico-node-68rlj" May 16 00:22:10.064458 kubelet[2779]: E0516 00:22:10.064403 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.064605 kubelet[2779]: W0516 00:22:10.064595 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.064678 kubelet[2779]: E0516 00:22:10.064669 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.071709 kubelet[2779]: E0516 00:22:10.071692 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.071709 kubelet[2779]: W0516 00:22:10.071703 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.071800 kubelet[2779]: E0516 00:22:10.071714 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.084653 containerd[1554]: time="2025-05-16T00:22:10.084626908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64bd9997d8-7skdd,Uid:e60d0807-2ce8-415d-b089-3487df5f2668,Namespace:calico-system,Attempt:0,}" May 16 00:22:10.097534 containerd[1554]: time="2025-05-16T00:22:10.097402462Z" level=info msg="connecting to shim 22e66a9492f4003f15f17524e34e9deb9fd4564381c1a0a8d28c712d80a28786" address="unix:///run/containerd/s/6d05f6263e901d5289f861414230d79e0366ec329a977fd446f58f33b40dcbad" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:10.114478 systemd[1]: Started cri-containerd-22e66a9492f4003f15f17524e34e9deb9fd4564381c1a0a8d28c712d80a28786.scope - libcontainer container 22e66a9492f4003f15f17524e34e9deb9fd4564381c1a0a8d28c712d80a28786. 
May 16 00:22:10.146882 containerd[1554]: time="2025-05-16T00:22:10.146856196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64bd9997d8-7skdd,Uid:e60d0807-2ce8-415d-b089-3487df5f2668,Namespace:calico-system,Attempt:0,} returns sandbox id \"22e66a9492f4003f15f17524e34e9deb9fd4564381c1a0a8d28c712d80a28786\""
May 16 00:22:10.152653 containerd[1554]: time="2025-05-16T00:22:10.152630157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\""
May 16 00:22:10.203588 kubelet[2779]: E0516 00:22:10.203405 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vl" podUID="67e150e9-6407-4d80-99aa-db86a8170146"
May 16 00:22:10.258438 containerd[1554]: time="2025-05-16T00:22:10.258411878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68rlj,Uid:75b9e71a-f767-4232-936e-91d6da8e72cd,Namespace:calico-system,Attempt:0,}"
May 16 00:22:10.260662 kubelet[2779]: E0516 00:22:10.260641 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 00:22:10.260662 kubelet[2779]: W0516 00:22:10.260656 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 00:22:10.260833 kubelet[2779]: E0516 00:22:10.260669 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three FlexVolume driver-call messages above repeat essentially verbatim, dozens of times, between 00:22:10.260 and 00:22:10.384 as the kubelet repeatedly re-probes the nodeagent~uds plugin directory; the duplicated entries are elided here, keeping only the distinct events from that window.]
May 16 00:22:10.268099 containerd[1554]: time="2025-05-16T00:22:10.267821714Z" level=info msg="connecting to shim 0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8" address="unix:///run/containerd/s/97ffde6db70a736c692bd9c5d60e3a5de98cd26b1e3b7224e97204dc075323f3" namespace=k8s.io protocol=ttrpc version=3
May 16 00:22:10.281937 kubelet[2779]: I0516 00:22:10.281886 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/67e150e9-6407-4d80-99aa-db86a8170146-socket-dir\") pod \"csi-node-driver-sv6vl\" (UID: \"67e150e9-6407-4d80-99aa-db86a8170146\") " pod="calico-system/csi-node-driver-sv6vl"
May 16 00:22:10.282000 kubelet[2779]: I0516 00:22:10.281984 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/67e150e9-6407-4d80-99aa-db86a8170146-varrun\") pod \"csi-node-driver-sv6vl\" (UID: \"67e150e9-6407-4d80-99aa-db86a8170146\") " pod="calico-system/csi-node-driver-sv6vl"
May 16 00:22:10.282329 kubelet[2779]: I0516 00:22:10.282089 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67e150e9-6407-4d80-99aa-db86a8170146-kubelet-dir\") pod \"csi-node-driver-sv6vl\" (UID: \"67e150e9-6407-4d80-99aa-db86a8170146\") " pod="calico-system/csi-node-driver-sv6vl"
May 16 00:22:10.282329 kubelet[2779]: I0516 00:22:10.282188 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/67e150e9-6407-4d80-99aa-db86a8170146-registration-dir\") pod \"csi-node-driver-sv6vl\" (UID: \"67e150e9-6407-4d80-99aa-db86a8170146\") " pod="calico-system/csi-node-driver-sv6vl"
May 16 00:22:10.282568 kubelet[2779]: I0516 00:22:10.282285 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqdbq\" (UniqueName: \"kubernetes.io/projected/67e150e9-6407-4d80-99aa-db86a8170146-kube-api-access-qqdbq\") pod \"csi-node-driver-sv6vl\" (UID: \"67e150e9-6407-4d80-99aa-db86a8170146\") " pod="calico-system/csi-node-driver-sv6vl"
May 16 00:22:10.282937 systemd[1]: Started cri-containerd-0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8.scope - libcontainer container 0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8.
May 16 00:22:10.301807 containerd[1554]: time="2025-05-16T00:22:10.301757087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68rlj,Uid:75b9e71a-f767-4232-936e-91d6da8e72cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8\""
Error: unexpected end of JSON input" May 16 00:22:10.387066 kubelet[2779]: E0516 00:22:10.384786 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.387066 kubelet[2779]: W0516 00:22:10.384813 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.387066 kubelet[2779]: E0516 00:22:10.384821 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.387209 kubelet[2779]: E0516 00:22:10.384924 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.387209 kubelet[2779]: W0516 00:22:10.384930 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.387209 kubelet[2779]: E0516 00:22:10.384934 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.387209 kubelet[2779]: E0516 00:22:10.385046 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.387209 kubelet[2779]: W0516 00:22:10.385057 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.387209 kubelet[2779]: E0516 00:22:10.385062 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.388289 kubelet[2779]: E0516 00:22:10.388228 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.388289 kubelet[2779]: W0516 00:22:10.388237 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.388289 kubelet[2779]: E0516 00:22:10.388243 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.388396 kubelet[2779]: E0516 00:22:10.388367 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.388396 kubelet[2779]: W0516 00:22:10.388373 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.388396 kubelet[2779]: E0516 00:22:10.388378 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:22:10.388480 kubelet[2779]: E0516 00:22:10.388469 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.388480 kubelet[2779]: W0516 00:22:10.388478 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.388524 kubelet[2779]: E0516 00:22:10.388484 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.388605 kubelet[2779]: E0516 00:22:10.388595 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.388605 kubelet[2779]: W0516 00:22:10.388603 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.388662 kubelet[2779]: E0516 00:22:10.388613 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.388714 kubelet[2779]: E0516 00:22:10.388703 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.388745 kubelet[2779]: W0516 00:22:10.388718 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.388745 kubelet[2779]: E0516 00:22:10.388723 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.388826 kubelet[2779]: E0516 00:22:10.388817 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.388826 kubelet[2779]: W0516 00:22:10.388823 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.388949 kubelet[2779]: E0516 00:22:10.388828 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.388977 kubelet[2779]: E0516 00:22:10.388958 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.388977 kubelet[2779]: W0516 00:22:10.388962 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.388977 kubelet[2779]: E0516 00:22:10.388967 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:22:10.389090 kubelet[2779]: E0516 00:22:10.389080 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.389090 kubelet[2779]: W0516 00:22:10.389087 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.389138 kubelet[2779]: E0516 00:22:10.389092 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.389208 kubelet[2779]: E0516 00:22:10.389199 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.389208 kubelet[2779]: W0516 00:22:10.389207 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.389254 kubelet[2779]: E0516 00:22:10.389213 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.389328 kubelet[2779]: E0516 00:22:10.389319 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.389328 kubelet[2779]: W0516 00:22:10.389326 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.389382 kubelet[2779]: E0516 00:22:10.389330 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:10.394176 kubelet[2779]: E0516 00:22:10.394163 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:10.394176 kubelet[2779]: W0516 00:22:10.394177 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:10.394248 kubelet[2779]: E0516 00:22:10.394187 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:22:11.856142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621546290.mount: Deactivated successfully. 
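The storm above is mechanical: on every probe of the FlexVolume plugin directory the kubelet execs the driver binary with the argument init and JSON-decodes whatever the binary prints. Since /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, the output is the empty string, and decoding "" fails with "unexpected end of JSON input" before the underlying exec failure is surfaced. A minimal Go sketch of that call-and-decode step, using a simplified status envelope rather than the kubelet's exact types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is a cut-down version of the JSON envelope a FlexVolume
    // driver is expected to print, e.g. {"status":"Success"}.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func callDriver(executable string, args ...string) (*driverStatus, error) {
        out, execErr := exec.Command(executable, args...).CombinedOutput()
        // A missing binary yields execErr ("executable file not found in $PATH")
        // and empty output; unmarshalling "" is what produces the
        // "unexpected end of JSON input" seen in the log.
        var st driverStatus
        if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
            return nil, fmt.Errorf("unmarshal output for command %v failed: %v (exec error: %v, output: %q)",
                args, jsonErr, execErr, string(out))
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println(err)
    }

Installing the driver binary, or removing the stale nodeagent~uds directory so the probe no longer finds it, is what stops this triplet from repeating.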
May 16 00:22:12.063161 kubelet[2779]: E0516 00:22:12.063132 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vl" podUID="67e150e9-6407-4d80-99aa-db86a8170146" May 16 00:22:12.445034 containerd[1554]: time="2025-05-16T00:22:12.445005017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:12.445728 containerd[1554]: time="2025-05-16T00:22:12.445638523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 16 00:22:12.446038 containerd[1554]: time="2025-05-16T00:22:12.446018486Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:12.447038 containerd[1554]: time="2025-05-16T00:22:12.447010087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:12.447629 containerd[1554]: time="2025-05-16T00:22:12.447534750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.294881574s" May 16 00:22:12.447629 containerd[1554]: time="2025-05-16T00:22:12.447556352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 16 00:22:12.453209 containerd[1554]: time="2025-05-16T00:22:12.453159861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 00:22:12.479967 containerd[1554]: time="2025-05-16T00:22:12.479764060Z" level=info msg="CreateContainer within sandbox \"22e66a9492f4003f15f17524e34e9deb9fd4564381c1a0a8d28c712d80a28786\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 00:22:12.483650 containerd[1554]: time="2025-05-16T00:22:12.483632851Z" level=info msg="Container 6fb748ac82f196c78ee1b258a9d35a705e21d7126f3aaa808100a018759cad03: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:12.489806 containerd[1554]: time="2025-05-16T00:22:12.489723339Z" level=info msg="CreateContainer within sandbox \"22e66a9492f4003f15f17524e34e9deb9fd4564381c1a0a8d28c712d80a28786\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6fb748ac82f196c78ee1b258a9d35a705e21d7126f3aaa808100a018759cad03\"" May 16 00:22:12.491228 containerd[1554]: time="2025-05-16T00:22:12.490259737Z" level=info msg="StartContainer for \"6fb748ac82f196c78ee1b258a9d35a705e21d7126f3aaa808100a018759cad03\"" May 16 00:22:12.491228 containerd[1554]: time="2025-05-16T00:22:12.491094260Z" level=info msg="connecting to shim 6fb748ac82f196c78ee1b258a9d35a705e21d7126f3aaa808100a018759cad03" address="unix:///run/containerd/s/6d05f6263e901d5289f861414230d79e0366ec329a977fd446f58f33b40dcbad" protocol=ttrpc version=3 May 16 00:22:12.511466 systemd[1]: Started 
cri-containerd-6fb748ac82f196c78ee1b258a9d35a705e21d7126f3aaa808100a018759cad03.scope - libcontainer container 6fb748ac82f196c78ee1b258a9d35a705e21d7126f3aaa808100a018759cad03. May 16 00:22:12.546755 containerd[1554]: time="2025-05-16T00:22:12.546735967Z" level=info msg="StartContainer for \"6fb748ac82f196c78ee1b258a9d35a705e21d7126f3aaa808100a018759cad03\" returns successfully" May 16 00:22:13.180320 kubelet[2779]: I0516 00:22:13.180274 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64bd9997d8-7skdd" podStartSLOduration=1.8771914440000002 podStartE2EDuration="4.180263215s" podCreationTimestamp="2025-05-16 00:22:09 +0000 UTC" firstStartedPulling="2025-05-16 00:22:10.149971647 +0000 UTC m=+16.208757269" lastFinishedPulling="2025-05-16 00:22:12.453043418 +0000 UTC m=+18.511829040" observedRunningTime="2025-05-16 00:22:13.179989351 +0000 UTC m=+19.238774983" watchObservedRunningTime="2025-05-16 00:22:13.180263215 +0000 UTC m=+19.239048865"
May 16 00:22:13.215539 kubelet[2779]: E0516 00:22:13.215513 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:13.215539 kubelet[2779]: W0516 00:22:13.215530 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:13.222112 kubelet[2779]: E0516 00:22:13.222083 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-message kubelet sequence repeats verbatim between 00:22:13.222 and 00:22:13.324 as the nodeagent~uds plugin directory is probed again]
May 16 00:22:13.324075 kubelet[2779]: E0516 00:22:13.324061 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:22:13.324130 kubelet[2779]: W0516 00:22:13.324077 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:22:13.324130 kubelet[2779]: E0516 00:22:13.324087 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:22:13.909175 containerd[1554]: time="2025-05-16T00:22:13.909139901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:13.909700 containerd[1554]: time="2025-05-16T00:22:13.909605721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 16 00:22:13.910364 containerd[1554]: time="2025-05-16T00:22:13.909980116Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:13.910833 containerd[1554]: time="2025-05-16T00:22:13.910811195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:13.911383 containerd[1554]: time="2025-05-16T00:22:13.911193110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.458016401s" May 16 00:22:13.911383 containerd[1554]: time="2025-05-16T00:22:13.911213234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 16 00:22:13.913113 containerd[1554]: time="2025-05-16T00:22:13.913088638Z" level=info msg="CreateContainer within sandbox \"0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 00:22:13.920313 containerd[1554]: time="2025-05-16T00:22:13.919677397Z" level=info msg="Container a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:13.920281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175254464.mount: Deactivated successfully. May 16 00:22:13.932191 containerd[1554]: time="2025-05-16T00:22:13.932160611Z" level=info msg="CreateContainer within sandbox \"0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1\"" May 16 00:22:13.932685 containerd[1554]: time="2025-05-16T00:22:13.932667731Z" level=info msg="StartContainer for \"a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1\"" May 16 00:22:13.934703 containerd[1554]: time="2025-05-16T00:22:13.934674301Z" level=info msg="connecting to shim a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1" address="unix:///run/containerd/s/97ffde6db70a736c692bd9c5d60e3a5de98cd26b1e3b7224e97204dc075323f3" protocol=ttrpc version=3 May 16 00:22:13.963439 systemd[1]: Started cri-containerd-a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1.scope - libcontainer container a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1. 
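The pod2daemon-flexvol pull above follows the standard containerd sequence: ImageCreate events for the tag, the sha256: image id, and the repo digest, then a Pulled summary carrying the byte count and elapsed time. A minimal sketch of the same operation through the containerd Go client, assuming the default socket path and the k8s.io namespace that CRI-managed images live in:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Same socket the kubelet's CRI connection uses on this host.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images are kept in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull resolves the tag to a digest, fetches the layers, and unpacks
        // a snapshot, emitting ImageCreate events like the ones logged above.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }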
May 16 00:22:13.993209 containerd[1554]: time="2025-05-16T00:22:13.993179164Z" level=info msg="StartContainer for \"a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1\" returns successfully" May 16 00:22:14.003866 systemd[1]: cri-containerd-a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1.scope: Deactivated successfully. May 16 00:22:14.004673 systemd[1]: cri-containerd-a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1.scope: Consumed 18ms CPU time, 6.2M memory peak, 3.4M written to disk. May 16 00:22:14.020644 containerd[1554]: time="2025-05-16T00:22:14.020601589Z" level=info msg="received exit event container_id:\"a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1\" id:\"a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1\" pid:3441 exited_at:{seconds:1747354934 nanos:6150818}" May 16 00:22:14.065691 kubelet[2779]: E0516 00:22:14.065379 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vl" podUID="67e150e9-6407-4d80-99aa-db86a8170146" May 16 00:22:14.077932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1-rootfs.mount: Deactivated successfully. May 16 00:22:14.079299 containerd[1554]: time="2025-05-16T00:22:14.078655086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1\" id:\"a383a0484435457725798bba95d0d6189cf9ffa9f3ea6a7bb23da395daa777a1\" pid:3441 exited_at:{seconds:1747354934 nanos:6150818}" May 16 00:22:14.164006 kubelet[2779]: I0516 00:22:14.163864 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:22:15.167681 containerd[1554]: time="2025-05-16T00:22:15.167632113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 00:22:16.063517 kubelet[2779]: E0516 00:22:16.062947 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vl" podUID="67e150e9-6407-4d80-99aa-db86a8170146" May 16 00:22:18.063909 kubelet[2779]: E0516 00:22:18.063865 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vl" podUID="67e150e9-6407-4d80-99aa-db86a8170146" May 16 00:22:18.892683 containerd[1554]: time="2025-05-16T00:22:18.892643246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:18.893113 containerd[1554]: time="2025-05-16T00:22:18.893078857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 16 00:22:18.893703 containerd[1554]: time="2025-05-16T00:22:18.893379050Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:18.894359 containerd[1554]: 
time="2025-05-16T00:22:18.894213289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:18.894890 containerd[1554]: time="2025-05-16T00:22:18.894609265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.72695387s" May 16 00:22:18.894890 containerd[1554]: time="2025-05-16T00:22:18.894624604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 16 00:22:18.900163 containerd[1554]: time="2025-05-16T00:22:18.899631690Z" level=info msg="CreateContainer within sandbox \"0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 00:22:18.905832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980482238.mount: Deactivated successfully. May 16 00:22:18.906607 containerd[1554]: time="2025-05-16T00:22:18.906593387Z" level=info msg="Container 7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:18.916032 containerd[1554]: time="2025-05-16T00:22:18.916012959Z" level=info msg="CreateContainer within sandbox \"0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1\"" May 16 00:22:18.916841 containerd[1554]: time="2025-05-16T00:22:18.916408932Z" level=info msg="StartContainer for \"7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1\"" May 16 00:22:18.917653 containerd[1554]: time="2025-05-16T00:22:18.917193576Z" level=info msg="connecting to shim 7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1" address="unix:///run/containerd/s/97ffde6db70a736c692bd9c5d60e3a5de98cd26b1e3b7224e97204dc075323f3" protocol=ttrpc version=3 May 16 00:22:18.943432 systemd[1]: Started cri-containerd-7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1.scope - libcontainer container 7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1. May 16 00:22:18.968872 containerd[1554]: time="2025-05-16T00:22:18.968844855Z" level=info msg="StartContainer for \"7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1\" returns successfully" May 16 00:22:20.019658 systemd[1]: cri-containerd-7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1.scope: Deactivated successfully. May 16 00:22:20.019963 systemd[1]: cri-containerd-7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1.scope: Consumed 287ms CPU time, 164.5M memory peak, 16K read from disk, 170.9M written to disk. 
May 16 00:22:20.022598 containerd[1554]: time="2025-05-16T00:22:20.022285830Z" level=info msg="received exit event container_id:\"7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1\" id:\"7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1\" pid:3503 exited_at:{seconds:1747354940 nanos:22095312}" May 16 00:22:20.025646 containerd[1554]: time="2025-05-16T00:22:20.025633767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1\" id:\"7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1\" pid:3503 exited_at:{seconds:1747354940 nanos:22095312}" May 16 00:22:20.043847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bbc0276a0d856f9bb227b0c7062cfec3df35e60864917a6d78688fd6a638db1-rootfs.mount: Deactivated successfully. May 16 00:22:20.062537 kubelet[2779]: E0516 00:22:20.062512 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vl" podUID="67e150e9-6407-4d80-99aa-db86a8170146" May 16 00:22:20.119127 kubelet[2779]: I0516 00:22:20.119075 2779 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:22:20.276496 systemd[1]: Created slice kubepods-burstable-pod0e107303_c733_4385_b560_7b7a1c214620.slice - libcontainer container kubepods-burstable-pod0e107303_c733_4385_b560_7b7a1c214620.slice. May 16 00:22:20.299478 systemd[1]: Created slice kubepods-burstable-podb44c391c_7cde_4dcf_b519_756e8bbaeb58.slice - libcontainer container kubepods-burstable-podb44c391c_7cde_4dcf_b519_756e8bbaeb58.slice. May 16 00:22:20.316893 systemd[1]: Created slice kubepods-besteffort-pod9d770ce6_58c4_46a7_9547_3ce0bd7d1645.slice - libcontainer container kubepods-besteffort-pod9d770ce6_58c4_46a7_9547_3ce0bd7d1645.slice. May 16 00:22:20.321850 systemd[1]: Created slice kubepods-besteffort-podd8af785c_b34f_443e_8ddd_87c4ecca1bac.slice - libcontainer container kubepods-besteffort-podd8af785c_b34f_443e_8ddd_87c4ecca1bac.slice. May 16 00:22:20.327301 systemd[1]: Created slice kubepods-besteffort-pod52758d11_1828_43a1_82dc_8636be4d16ce.slice - libcontainer container kubepods-besteffort-pod52758d11_1828_43a1_82dc_8636be4d16ce.slice. May 16 00:22:20.339159 systemd[1]: Created slice kubepods-besteffort-podf8823d20_0cc7_41eb_b83e_818f1ec7a773.slice - libcontainer container kubepods-besteffort-podf8823d20_0cc7_41eb_b83e_818f1ec7a773.slice. May 16 00:22:20.345868 systemd[1]: Created slice kubepods-besteffort-pod8eb7a5fd_489c_4f9d_bbb4_c5102881a604.slice - libcontainer container kubepods-besteffort-pod8eb7a5fd_489c_4f9d_bbb4_c5102881a604.slice. May 16 00:22:20.350444 systemd[1]: Created slice kubepods-besteffort-podb31223af_d117_49cb_b138_65ed4e7ea99c.slice - libcontainer container kubepods-besteffort-podb31223af_d117_49cb_b138_65ed4e7ea99c.slice. 
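The Created slice entries show the kubelet's systemd cgroup driver making one transient slice per pod. Dashes in the pod UID are escaped to underscores, and the QoS class is encoded in the dash-separated prefix, so the unit nests as kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<uid>.slice. A small sketch of that naming rule, matching the names in the log (illustrative, not the kubelet's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the unit names above: systemd slice names encode
    // their parents in dash-separated segments, so the pod UID's own dashes are
    // escaped to underscores to avoid introducing extra nesting levels.
    func podSliceName(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        // kubepods-burstable-pod0e107303_c733_4385_b560_7b7a1c214620.slice
        fmt.Println(podSliceName("burstable", "0e107303-c733-4385-b560-7b7a1c214620"))
        // kubepods-besteffort-pod9d770ce6_58c4_46a7_9547_3ce0bd7d1645.slice
        fmt.Println(podSliceName("besteffort", "9d770ce6-58c4-46a7-9547-3ce0bd7d1645"))
    }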
May 16 00:22:20.360146 kubelet[2779]: I0516 00:22:20.360084 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b44c391c-7cde-4dcf-b519-756e8bbaeb58-config-volume\") pod \"coredns-674b8bbfcf-4wc9z\" (UID: \"b44c391c-7cde-4dcf-b519-756e8bbaeb58\") " pod="kube-system/coredns-674b8bbfcf-4wc9z" May 16 00:22:20.360328 kubelet[2779]: I0516 00:22:20.360235 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52758d11-1828-43a1-82dc-8636be4d16ce-config\") pod \"goldmane-78d55f7ddc-52cwp\" (UID: \"52758d11-1828-43a1-82dc-8636be4d16ce\") " pod="calico-system/goldmane-78d55f7ddc-52cwp" May 16 00:22:20.360328 kubelet[2779]: I0516 00:22:20.360247 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52758d11-1828-43a1-82dc-8636be4d16ce-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-52cwp\" (UID: \"52758d11-1828-43a1-82dc-8636be4d16ce\") " pod="calico-system/goldmane-78d55f7ddc-52cwp" May 16 00:22:20.360328 kubelet[2779]: I0516 00:22:20.360260 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e107303-c733-4385-b560-7b7a1c214620-config-volume\") pod \"coredns-674b8bbfcf-jvpnm\" (UID: \"0e107303-c733-4385-b560-7b7a1c214620\") " pod="kube-system/coredns-674b8bbfcf-jvpnm" May 16 00:22:20.360328 kubelet[2779]: I0516 00:22:20.360270 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqp4n\" (UniqueName: \"kubernetes.io/projected/0e107303-c733-4385-b560-7b7a1c214620-kube-api-access-nqp4n\") pod \"coredns-674b8bbfcf-jvpnm\" (UID: \"0e107303-c733-4385-b560-7b7a1c214620\") " pod="kube-system/coredns-674b8bbfcf-jvpnm" May 16 00:22:20.360328 kubelet[2779]: I0516 00:22:20.360281 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jmw\" (UniqueName: \"kubernetes.io/projected/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-kube-api-access-98jmw\") pod \"calico-apiserver-69f746874b-lqglx\" (UID: \"9d770ce6-58c4-46a7-9547-3ce0bd7d1645\") " pod="calico-apiserver/calico-apiserver-69f746874b-lqglx" May 16 00:22:20.360501 kubelet[2779]: I0516 00:22:20.360482 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/52758d11-1828-43a1-82dc-8636be4d16ce-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-52cwp\" (UID: \"52758d11-1828-43a1-82dc-8636be4d16ce\") " pod="calico-system/goldmane-78d55f7ddc-52cwp" May 16 00:22:20.360528 kubelet[2779]: I0516 00:22:20.360513 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-calico-apiserver-certs\") pod \"calico-apiserver-69f746874b-lqglx\" (UID: \"9d770ce6-58c4-46a7-9547-3ce0bd7d1645\") " pod="calico-apiserver/calico-apiserver-69f746874b-lqglx" May 16 00:22:20.360979 kubelet[2779]: I0516 00:22:20.360587 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqqw\" (UniqueName: 
\"kubernetes.io/projected/d8af785c-b34f-443e-8ddd-87c4ecca1bac-kube-api-access-whqqw\") pod \"calico-kube-controllers-87cc6cc95-w8v7r\" (UID: \"d8af785c-b34f-443e-8ddd-87c4ecca1bac\") " pod="calico-system/calico-kube-controllers-87cc6cc95-w8v7r" May 16 00:22:20.360979 kubelet[2779]: I0516 00:22:20.360600 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8af785c-b34f-443e-8ddd-87c4ecca1bac-tigera-ca-bundle\") pod \"calico-kube-controllers-87cc6cc95-w8v7r\" (UID: \"d8af785c-b34f-443e-8ddd-87c4ecca1bac\") " pod="calico-system/calico-kube-controllers-87cc6cc95-w8v7r" May 16 00:22:20.360979 kubelet[2779]: I0516 00:22:20.360609 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zwxj\" (UniqueName: \"kubernetes.io/projected/52758d11-1828-43a1-82dc-8636be4d16ce-kube-api-access-4zwxj\") pod \"goldmane-78d55f7ddc-52cwp\" (UID: \"52758d11-1828-43a1-82dc-8636be4d16ce\") " pod="calico-system/goldmane-78d55f7ddc-52cwp" May 16 00:22:20.360979 kubelet[2779]: I0516 00:22:20.360933 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh2nb\" (UniqueName: \"kubernetes.io/projected/b44c391c-7cde-4dcf-b519-756e8bbaeb58-kube-api-access-lh2nb\") pod \"coredns-674b8bbfcf-4wc9z\" (UID: \"b44c391c-7cde-4dcf-b519-756e8bbaeb58\") " pod="kube-system/coredns-674b8bbfcf-4wc9z" May 16 00:22:20.462402 kubelet[2779]: I0516 00:22:20.462012 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5h8c\" (UniqueName: \"kubernetes.io/projected/b31223af-d117-49cb-b138-65ed4e7ea99c-kube-api-access-c5h8c\") pod \"calico-apiserver-d8c456b68-lvb6f\" (UID: \"b31223af-d117-49cb-b138-65ed4e7ea99c\") " pod="calico-apiserver/calico-apiserver-d8c456b68-lvb6f" May 16 00:22:20.462402 kubelet[2779]: I0516 00:22:20.462059 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8823d20-0cc7-41eb-b83e-818f1ec7a773-calico-apiserver-certs\") pod \"calico-apiserver-69f746874b-drrg8\" (UID: \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\") " pod="calico-apiserver/calico-apiserver-69f746874b-drrg8" May 16 00:22:20.462402 kubelet[2779]: I0516 00:22:20.462109 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b31223af-d117-49cb-b138-65ed4e7ea99c-calico-apiserver-certs\") pod \"calico-apiserver-d8c456b68-lvb6f\" (UID: \"b31223af-d117-49cb-b138-65ed4e7ea99c\") " pod="calico-apiserver/calico-apiserver-d8c456b68-lvb6f" May 16 00:22:20.462402 kubelet[2779]: I0516 00:22:20.462127 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2mr4\" (UniqueName: \"kubernetes.io/projected/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-kube-api-access-n2mr4\") pod \"whisker-f8995ffc6-9xdcl\" (UID: \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\") " pod="calico-system/whisker-f8995ffc6-9xdcl" May 16 00:22:20.462402 kubelet[2779]: I0516 00:22:20.462153 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-backend-key-pair\") pod \"whisker-f8995ffc6-9xdcl\" (UID: 
\"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\") " pod="calico-system/whisker-f8995ffc6-9xdcl" May 16 00:22:20.462566 kubelet[2779]: I0516 00:22:20.462162 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-ca-bundle\") pod \"whisker-f8995ffc6-9xdcl\" (UID: \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\") " pod="calico-system/whisker-f8995ffc6-9xdcl" May 16 00:22:20.462566 kubelet[2779]: I0516 00:22:20.462191 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8798d\" (UniqueName: \"kubernetes.io/projected/f8823d20-0cc7-41eb-b83e-818f1ec7a773-kube-api-access-8798d\") pod \"calico-apiserver-69f746874b-drrg8\" (UID: \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\") " pod="calico-apiserver/calico-apiserver-69f746874b-drrg8" May 16 00:22:20.581569 containerd[1554]: time="2025-05-16T00:22:20.581220248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvpnm,Uid:0e107303-c733-4385-b560-7b7a1c214620,Namespace:kube-system,Attempt:0,}" May 16 00:22:20.609784 containerd[1554]: time="2025-05-16T00:22:20.608346343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wc9z,Uid:b44c391c-7cde-4dcf-b519-756e8bbaeb58,Namespace:kube-system,Attempt:0,}" May 16 00:22:20.622329 containerd[1554]: time="2025-05-16T00:22:20.622092345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-lqglx,Uid:9d770ce6-58c4-46a7-9547-3ce0bd7d1645,Namespace:calico-apiserver,Attempt:0,}" May 16 00:22:20.625153 containerd[1554]: time="2025-05-16T00:22:20.625122059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87cc6cc95-w8v7r,Uid:d8af785c-b34f-443e-8ddd-87c4ecca1bac,Namespace:calico-system,Attempt:0,}" May 16 00:22:20.647861 containerd[1554]: time="2025-05-16T00:22:20.647639493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-drrg8,Uid:f8823d20-0cc7-41eb-b83e-818f1ec7a773,Namespace:calico-apiserver,Attempt:0,}" May 16 00:22:20.647861 containerd[1554]: time="2025-05-16T00:22:20.647663694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-52cwp,Uid:52758d11-1828-43a1-82dc-8636be4d16ce,Namespace:calico-system,Attempt:0,}" May 16 00:22:20.648476 containerd[1554]: time="2025-05-16T00:22:20.648460347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8995ffc6-9xdcl,Uid:8eb7a5fd-489c-4f9d-bbb4-c5102881a604,Namespace:calico-system,Attempt:0,}" May 16 00:22:20.654607 containerd[1554]: time="2025-05-16T00:22:20.654578336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8c456b68-lvb6f,Uid:b31223af-d117-49cb-b138-65ed4e7ea99c,Namespace:calico-apiserver,Attempt:0,}" May 16 00:22:20.846425 containerd[1554]: time="2025-05-16T00:22:20.846271945Z" level=error msg="Failed to destroy network for sandbox \"607901a845f67c6f1891b1522cc9706488e4c4a13dc34e9a2609076cb4204028\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.852550 kubelet[2779]: I0516 00:22:20.851980 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:22:20.855379 containerd[1554]: time="2025-05-16T00:22:20.855332893Z" level=error msg="Failed to destroy network 
for sandbox \"eba88e73a155bb8ddc1b56d15de0178919ff7a17c203a43df9f9329af262bb4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.857378 containerd[1554]: time="2025-05-16T00:22:20.857354764Z" level=error msg="Failed to destroy network for sandbox \"7b3c153b39654609cdeaba0e5079104801bdcbce09d501b05de2a471d015cd45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.880447 containerd[1554]: time="2025-05-16T00:22:20.860127216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-52cwp,Uid:52758d11-1828-43a1-82dc-8636be4d16ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba88e73a155bb8ddc1b56d15de0178919ff7a17c203a43df9f9329af262bb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.880803 containerd[1554]: time="2025-05-16T00:22:20.862076993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8995ffc6-9xdcl,Uid:8eb7a5fd-489c-4f9d-bbb4-c5102881a604,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c153b39654609cdeaba0e5079104801bdcbce09d501b05de2a471d015cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.886955 containerd[1554]: time="2025-05-16T00:22:20.862384219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87cc6cc95-w8v7r,Uid:d8af785c-b34f-443e-8ddd-87c4ecca1bac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"607901a845f67c6f1891b1522cc9706488e4c4a13dc34e9a2609076cb4204028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.887363 containerd[1554]: time="2025-05-16T00:22:20.862815902Z" level=error msg="Failed to destroy network for sandbox \"cd034d4321ea600a426ad79768d4799c4a93606e9d6ac40760e4ee19f415a9f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.887679 containerd[1554]: time="2025-05-16T00:22:20.887600677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wc9z,Uid:b44c391c-7cde-4dcf-b519-756e8bbaeb58,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd034d4321ea600a426ad79768d4799c4a93606e9d6ac40760e4ee19f415a9f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.887679 containerd[1554]: time="2025-05-16T00:22:20.865467894Z" level=error msg="Failed to destroy network for sandbox 
\"d049572f4ebe4ba8fd5f3fa5b13ddc5e0ef67089950ddb74d212445ff805a0dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.888042 containerd[1554]: time="2025-05-16T00:22:20.888005459Z" level=error msg="Failed to destroy network for sandbox \"b15033c8ee10dbd05c0fb100857edb517681b302b65765fe5ee24e96a5c29418\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.888390 containerd[1554]: time="2025-05-16T00:22:20.888242739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-lqglx,Uid:9d770ce6-58c4-46a7-9547-3ce0bd7d1645,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d049572f4ebe4ba8fd5f3fa5b13ddc5e0ef67089950ddb74d212445ff805a0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.888390 containerd[1554]: time="2025-05-16T00:22:20.865767730Z" level=error msg="Failed to destroy network for sandbox \"44bfcea52977d469e1823b4976dec1e6861968517bf7daafdb6bd9f4d69aba9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.888987 containerd[1554]: time="2025-05-16T00:22:20.876298587Z" level=error msg="Failed to destroy network for sandbox \"c38bfe18eec9b427e6010e802dff9f89065a0262da90895ca98db5ee9dc58c59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.889317 kubelet[2779]: E0516 00:22:20.888609 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba88e73a155bb8ddc1b56d15de0178919ff7a17c203a43df9f9329af262bb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.889317 kubelet[2779]: E0516 00:22:20.888680 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba88e73a155bb8ddc1b56d15de0178919ff7a17c203a43df9f9329af262bb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-52cwp" May 16 00:22:20.889317 kubelet[2779]: E0516 00:22:20.888698 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba88e73a155bb8ddc1b56d15de0178919ff7a17c203a43df9f9329af262bb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-52cwp" May 16 00:22:20.889424 kubelet[2779]: E0516 00:22:20.888732 2779 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-52cwp_calico-system(52758d11-1828-43a1-82dc-8636be4d16ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-52cwp_calico-system(52758d11-1828-43a1-82dc-8636be4d16ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eba88e73a155bb8ddc1b56d15de0178919ff7a17c203a43df9f9329af262bb4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce" May 16 00:22:20.889922 kubelet[2779]: E0516 00:22:20.889907 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d049572f4ebe4ba8fd5f3fa5b13ddc5e0ef67089950ddb74d212445ff805a0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.889973 kubelet[2779]: E0516 00:22:20.889939 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d049572f4ebe4ba8fd5f3fa5b13ddc5e0ef67089950ddb74d212445ff805a0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f746874b-lqglx" May 16 00:22:20.889973 kubelet[2779]: E0516 00:22:20.889951 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d049572f4ebe4ba8fd5f3fa5b13ddc5e0ef67089950ddb74d212445ff805a0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f746874b-lqglx" May 16 00:22:20.890483 kubelet[2779]: E0516 00:22:20.889972 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69f746874b-lqglx_calico-apiserver(9d770ce6-58c4-46a7-9547-3ce0bd7d1645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69f746874b-lqglx_calico-apiserver(9d770ce6-58c4-46a7-9547-3ce0bd7d1645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d049572f4ebe4ba8fd5f3fa5b13ddc5e0ef67089950ddb74d212445ff805a0dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69f746874b-lqglx" podUID="9d770ce6-58c4-46a7-9547-3ce0bd7d1645" May 16 00:22:20.890483 kubelet[2779]: E0516 00:22:20.889993 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607901a845f67c6f1891b1522cc9706488e4c4a13dc34e9a2609076cb4204028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.890483 kubelet[2779]: E0516 00:22:20.890011 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607901a845f67c6f1891b1522cc9706488e4c4a13dc34e9a2609076cb4204028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-87cc6cc95-w8v7r" May 16 00:22:20.890686 containerd[1554]: time="2025-05-16T00:22:20.890119708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvpnm,Uid:0e107303-c733-4385-b560-7b7a1c214620,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15033c8ee10dbd05c0fb100857edb517681b302b65765fe5ee24e96a5c29418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.890723 kubelet[2779]: E0516 00:22:20.890021 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607901a845f67c6f1891b1522cc9706488e4c4a13dc34e9a2609076cb4204028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-87cc6cc95-w8v7r" May 16 00:22:20.890723 kubelet[2779]: E0516 00:22:20.890037 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-87cc6cc95-w8v7r_calico-system(d8af785c-b34f-443e-8ddd-87c4ecca1bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-87cc6cc95-w8v7r_calico-system(d8af785c-b34f-443e-8ddd-87c4ecca1bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"607901a845f67c6f1891b1522cc9706488e4c4a13dc34e9a2609076cb4204028\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-87cc6cc95-w8v7r" podUID="d8af785c-b34f-443e-8ddd-87c4ecca1bac" May 16 00:22:20.890723 kubelet[2779]: E0516 00:22:20.890054 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd034d4321ea600a426ad79768d4799c4a93606e9d6ac40760e4ee19f415a9f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.890794 kubelet[2779]: E0516 00:22:20.890064 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd034d4321ea600a426ad79768d4799c4a93606e9d6ac40760e4ee19f415a9f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4wc9z" May 16 00:22:20.890794 kubelet[2779]: E0516 00:22:20.890073 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd034d4321ea600a426ad79768d4799c4a93606e9d6ac40760e4ee19f415a9f4\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4wc9z" May 16 00:22:20.890794 kubelet[2779]: E0516 00:22:20.890103 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4wc9z_kube-system(b44c391c-7cde-4dcf-b519-756e8bbaeb58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4wc9z_kube-system(b44c391c-7cde-4dcf-b519-756e8bbaeb58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd034d4321ea600a426ad79768d4799c4a93606e9d6ac40760e4ee19f415a9f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4wc9z" podUID="b44c391c-7cde-4dcf-b519-756e8bbaeb58" May 16 00:22:20.890873 kubelet[2779]: E0516 00:22:20.890189 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15033c8ee10dbd05c0fb100857edb517681b302b65765fe5ee24e96a5c29418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.890873 kubelet[2779]: E0516 00:22:20.890223 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15033c8ee10dbd05c0fb100857edb517681b302b65765fe5ee24e96a5c29418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvpnm" May 16 00:22:20.890873 kubelet[2779]: E0516 00:22:20.890234 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15033c8ee10dbd05c0fb100857edb517681b302b65765fe5ee24e96a5c29418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvpnm" May 16 00:22:20.890942 kubelet[2779]: E0516 00:22:20.890261 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jvpnm_kube-system(0e107303-c733-4385-b560-7b7a1c214620)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jvpnm_kube-system(0e107303-c733-4385-b560-7b7a1c214620)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b15033c8ee10dbd05c0fb100857edb517681b302b65765fe5ee24e96a5c29418\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jvpnm" podUID="0e107303-c733-4385-b560-7b7a1c214620" May 16 00:22:20.890975 containerd[1554]: time="2025-05-16T00:22:20.890889642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8c456b68-lvb6f,Uid:b31223af-d117-49cb-b138-65ed4e7ea99c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"44bfcea52977d469e1823b4976dec1e6861968517bf7daafdb6bd9f4d69aba9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.892090 kubelet[2779]: E0516 00:22:20.891284 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44bfcea52977d469e1823b4976dec1e6861968517bf7daafdb6bd9f4d69aba9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.892090 kubelet[2779]: E0516 00:22:20.891304 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44bfcea52977d469e1823b4976dec1e6861968517bf7daafdb6bd9f4d69aba9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8c456b68-lvb6f" May 16 00:22:20.892090 kubelet[2779]: E0516 00:22:20.891313 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44bfcea52977d469e1823b4976dec1e6861968517bf7daafdb6bd9f4d69aba9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8c456b68-lvb6f" May 16 00:22:20.892684 containerd[1554]: time="2025-05-16T00:22:20.891918008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-drrg8,Uid:f8823d20-0cc7-41eb-b83e-818f1ec7a773,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38bfe18eec9b427e6010e802dff9f89065a0262da90895ca98db5ee9dc58c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.892822 kubelet[2779]: E0516 00:22:20.891333 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8c456b68-lvb6f_calico-apiserver(b31223af-d117-49cb-b138-65ed4e7ea99c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8c456b68-lvb6f_calico-apiserver(b31223af-d117-49cb-b138-65ed4e7ea99c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44bfcea52977d469e1823b4976dec1e6861968517bf7daafdb6bd9f4d69aba9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8c456b68-lvb6f" podUID="b31223af-d117-49cb-b138-65ed4e7ea99c" May 16 00:22:20.892822 kubelet[2779]: E0516 00:22:20.891992 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38bfe18eec9b427e6010e802dff9f89065a0262da90895ca98db5ee9dc58c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 16 00:22:20.892822 kubelet[2779]: E0516 00:22:20.892009 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38bfe18eec9b427e6010e802dff9f89065a0262da90895ca98db5ee9dc58c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f746874b-drrg8" May 16 00:22:20.892903 kubelet[2779]: E0516 00:22:20.892018 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38bfe18eec9b427e6010e802dff9f89065a0262da90895ca98db5ee9dc58c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f746874b-drrg8" May 16 00:22:20.892903 kubelet[2779]: E0516 00:22:20.892048 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69f746874b-drrg8_calico-apiserver(f8823d20-0cc7-41eb-b83e-818f1ec7a773)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69f746874b-drrg8_calico-apiserver(f8823d20-0cc7-41eb-b83e-818f1ec7a773)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c38bfe18eec9b427e6010e802dff9f89065a0262da90895ca98db5ee9dc58c59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69f746874b-drrg8" podUID="f8823d20-0cc7-41eb-b83e-818f1ec7a773" May 16 00:22:20.892903 kubelet[2779]: E0516 00:22:20.892711 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c153b39654609cdeaba0e5079104801bdcbce09d501b05de2a471d015cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:20.892974 kubelet[2779]: E0516 00:22:20.892735 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c153b39654609cdeaba0e5079104801bdcbce09d501b05de2a471d015cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f8995ffc6-9xdcl" May 16 00:22:20.892974 kubelet[2779]: E0516 00:22:20.892745 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c153b39654609cdeaba0e5079104801bdcbce09d501b05de2a471d015cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f8995ffc6-9xdcl" May 16 00:22:20.892974 kubelet[2779]: E0516 00:22:20.892763 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f8995ffc6-9xdcl_calico-system(8eb7a5fd-489c-4f9d-bbb4-c5102881a604)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f8995ffc6-9xdcl_calico-system(8eb7a5fd-489c-4f9d-bbb4-c5102881a604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b3c153b39654609cdeaba0e5079104801bdcbce09d501b05de2a471d015cd45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f8995ffc6-9xdcl" podUID="8eb7a5fd-489c-4f9d-bbb4-c5102881a604" May 16 00:22:21.187757 containerd[1554]: time="2025-05-16T00:22:21.187731965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 00:22:22.067188 systemd[1]: Created slice kubepods-besteffort-pod67e150e9_6407_4d80_99aa_db86a8170146.slice - libcontainer container kubepods-besteffort-pod67e150e9_6407_4d80_99aa_db86a8170146.slice. May 16 00:22:22.068758 containerd[1554]: time="2025-05-16T00:22:22.068706611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vl,Uid:67e150e9-6407-4d80-99aa-db86a8170146,Namespace:calico-system,Attempt:0,}" May 16 00:22:22.109789 containerd[1554]: time="2025-05-16T00:22:22.109753444Z" level=error msg="Failed to destroy network for sandbox \"24bb0af89863cdb6a77a109bf4579bec9ea07c4c57bc8929cbb5949a127c7d8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:22.111120 systemd[1]: run-netns-cni\x2da1870e1a\x2d07c8\x2dba82\x2d94cd\x2d17628bff5920.mount: Deactivated successfully. May 16 00:22:22.115430 containerd[1554]: time="2025-05-16T00:22:22.115407953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vl,Uid:67e150e9-6407-4d80-99aa-db86a8170146,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bb0af89863cdb6a77a109bf4579bec9ea07c4c57bc8929cbb5949a127c7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:22.115934 kubelet[2779]: E0516 00:22:22.115733 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bb0af89863cdb6a77a109bf4579bec9ea07c4c57bc8929cbb5949a127c7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:22:22.115934 kubelet[2779]: E0516 00:22:22.115784 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bb0af89863cdb6a77a109bf4579bec9ea07c4c57bc8929cbb5949a127c7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sv6vl" May 16 00:22:22.115934 kubelet[2779]: E0516 00:22:22.115797 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bb0af89863cdb6a77a109bf4579bec9ea07c4c57bc8929cbb5949a127c7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sv6vl" May 16 00:22:22.116122 kubelet[2779]: E0516 00:22:22.115834 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sv6vl_calico-system(67e150e9-6407-4d80-99aa-db86a8170146)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sv6vl_calico-system(67e150e9-6407-4d80-99aa-db86a8170146)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24bb0af89863cdb6a77a109bf4579bec9ea07c4c57bc8929cbb5949a127c7d8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sv6vl" podUID="67e150e9-6407-4d80-99aa-db86a8170146" May 16 00:22:26.052476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561538635.mount: Deactivated successfully. May 16 00:22:26.132575 containerd[1554]: time="2025-05-16T00:22:26.120215567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:26.159400 containerd[1554]: time="2025-05-16T00:22:26.143481938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 16 00:22:26.159400 containerd[1554]: time="2025-05-16T00:22:26.148445463Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:26.159765 containerd[1554]: time="2025-05-16T00:22:26.151747612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 4.961796628s" May 16 00:22:26.159765 containerd[1554]: time="2025-05-16T00:22:26.159664288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 16 00:22:26.159814 containerd[1554]: time="2025-05-16T00:22:26.159784475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:26.217823 containerd[1554]: time="2025-05-16T00:22:26.217790851Z" level=info msg="CreateContainer within sandbox \"0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 00:22:26.238887 containerd[1554]: time="2025-05-16T00:22:26.238777750Z" level=info msg="Container ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:26.239704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888231365.mount: Deactivated successfully. 
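Every sandbox add and delete above fails on the same stat: the Calico CNI plugin requires /var/lib/calico/nodename, which only a running calico/node container (with /var/lib/calico/ mounted) writes. The calico/node image pull that completes above (about 156 MB in just under 5 s, roughly 31 MB/s) is what eventually provides it. A sketch of that readiness gate, with the path taken from the log; the retry loop is illustrative, not Calico's actual code:

package main

import (
	"fmt"
	"os"
	"time"
)

const nodenameFile = "/var/lib/calico/nodename"

// waitForCalicoNode polls for the nodename file the CNI errors above
// complain about, returning its contents once calico/node has written it.
func waitForCalicoNode(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		b, err := os.ReadFile(nodenameFile)
		if err == nil {
			return string(b), nil // calico/node is up and has mounted /var/lib/calico/
		}
		if !os.IsNotExist(err) {
			return "", err
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running", nodenameFile)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	name, err := waitForCalicoNode(30 * time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}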
May 16 00:22:26.271757 containerd[1554]: time="2025-05-16T00:22:26.271725076Z" level=info msg="CreateContainer within sandbox \"0dd8687389d7fa838365556af79c72659b24559d76f2cea2bbcef4cd270fcfd8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3\"" May 16 00:22:26.276822 containerd[1554]: time="2025-05-16T00:22:26.276795278Z" level=info msg="StartContainer for \"ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3\"" May 16 00:22:26.277772 containerd[1554]: time="2025-05-16T00:22:26.277755912Z" level=info msg="connecting to shim ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3" address="unix:///run/containerd/s/97ffde6db70a736c692bd9c5d60e3a5de98cd26b1e3b7224e97204dc075323f3" protocol=ttrpc version=3 May 16 00:22:26.431470 systemd[1]: Started cri-containerd-ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3.scope - libcontainer container ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3. May 16 00:22:26.480160 containerd[1554]: time="2025-05-16T00:22:26.479955078Z" level=info msg="StartContainer for \"ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3\" returns successfully" May 16 00:22:26.934700 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 00:22:26.936203 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 00:22:27.406160 containerd[1554]: time="2025-05-16T00:22:27.406126740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3\" id:\"8854d19ccd2262a495c7a4c607385623955fb5f37f5477471f558ce977c98cfe\" pid:3866 exit_status:1 exited_at:{seconds:1747354947 nanos:405709990}" May 16 00:22:27.434209 kubelet[2779]: I0516 00:22:27.434062 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2mr4\" (UniqueName: \"kubernetes.io/projected/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-kube-api-access-n2mr4\") pod \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\" (UID: \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\") " May 16 00:22:27.434209 kubelet[2779]: I0516 00:22:27.434111 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-ca-bundle\") pod \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\" (UID: \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\") " May 16 00:22:27.434209 kubelet[2779]: I0516 00:22:27.434147 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-backend-key-pair\") pod \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\" (UID: \"8eb7a5fd-489c-4f9d-bbb4-c5102881a604\") " May 16 00:22:27.453473 kubelet[2779]: I0516 00:22:27.453276 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8eb7a5fd-489c-4f9d-bbb4-c5102881a604" (UID: "8eb7a5fd-489c-4f9d-bbb4-c5102881a604"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:22:27.462339 systemd[1]: var-lib-kubelet-pods-8eb7a5fd\x2d489c\x2d4f9d\x2dbbb4\x2dc5102881a604-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 16 00:22:27.462978 kubelet[2779]: I0516 00:22:27.462874 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8eb7a5fd-489c-4f9d-bbb4-c5102881a604" (UID: "8eb7a5fd-489c-4f9d-bbb4-c5102881a604"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:22:27.464144 kubelet[2779]: I0516 00:22:27.464076 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-kube-api-access-n2mr4" (OuterVolumeSpecName: "kube-api-access-n2mr4") pod "8eb7a5fd-489c-4f9d-bbb4-c5102881a604" (UID: "8eb7a5fd-489c-4f9d-bbb4-c5102881a604"). InnerVolumeSpecName "kube-api-access-n2mr4". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:22:27.464799 systemd[1]: var-lib-kubelet-pods-8eb7a5fd\x2d489c\x2d4f9d\x2dbbb4\x2dc5102881a604-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn2mr4.mount: Deactivated successfully. May 16 00:22:27.544753 kubelet[2779]: I0516 00:22:27.544722 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 16 00:22:27.544753 kubelet[2779]: I0516 00:22:27.544751 2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n2mr4\" (UniqueName: \"kubernetes.io/projected/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-kube-api-access-n2mr4\") on node \"localhost\" DevicePath \"\"" May 16 00:22:27.544753 kubelet[2779]: I0516 00:22:27.544763 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb7a5fd-489c-4f9d-bbb4-c5102881a604-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 16 00:22:28.076410 systemd[1]: Removed slice kubepods-besteffort-pod8eb7a5fd_489c_4f9d_bbb4_c5102881a604.slice - libcontainer container kubepods-besteffort-pod8eb7a5fd_489c_4f9d_bbb4_c5102881a604.slice. 
May 16 00:22:28.302124 kubelet[2779]: I0516 00:22:28.300243 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-68rlj" podStartSLOduration=3.440277348 podStartE2EDuration="19.298071869s" podCreationTimestamp="2025-05-16 00:22:09 +0000 UTC" firstStartedPulling="2025-05-16 00:22:10.30250445 +0000 UTC m=+16.361290069" lastFinishedPulling="2025-05-16 00:22:26.160298968 +0000 UTC m=+32.219084590" observedRunningTime="2025-05-16 00:22:27.279699547 +0000 UTC m=+33.338485178" watchObservedRunningTime="2025-05-16 00:22:28.298071869 +0000 UTC m=+34.356857496" May 16 00:22:28.351209 containerd[1554]: time="2025-05-16T00:22:28.351057613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3\" id:\"d3ef68507744dc0fc33388372f1a89bf5dea6e9843af026e9619e6aa5479c4b7\" pid:3893 exit_status:1 exited_at:{seconds:1747354948 nanos:350837732}" May 16 00:22:28.379155 systemd[1]: Created slice kubepods-besteffort-pod9824a9f5_eda0_4e88_bed1_bf9b90951f1d.slice - libcontainer container kubepods-besteffort-pod9824a9f5_eda0_4e88_bed1_bf9b90951f1d.slice. May 16 00:22:28.474996 kubelet[2779]: I0516 00:22:28.474963 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bslh6\" (UniqueName: \"kubernetes.io/projected/9824a9f5-eda0-4e88-bed1-bf9b90951f1d-kube-api-access-bslh6\") pod \"whisker-786d5f7786-2pbzk\" (UID: \"9824a9f5-eda0-4e88-bed1-bf9b90951f1d\") " pod="calico-system/whisker-786d5f7786-2pbzk" May 16 00:22:28.474996 kubelet[2779]: I0516 00:22:28.474996 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9824a9f5-eda0-4e88-bed1-bf9b90951f1d-whisker-backend-key-pair\") pod \"whisker-786d5f7786-2pbzk\" (UID: \"9824a9f5-eda0-4e88-bed1-bf9b90951f1d\") " pod="calico-system/whisker-786d5f7786-2pbzk" May 16 00:22:28.483178 kubelet[2779]: I0516 00:22:28.475007 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9824a9f5-eda0-4e88-bed1-bf9b90951f1d-whisker-ca-bundle\") pod \"whisker-786d5f7786-2pbzk\" (UID: \"9824a9f5-eda0-4e88-bed1-bf9b90951f1d\") " pod="calico-system/whisker-786d5f7786-2pbzk" May 16 00:22:28.683983 containerd[1554]: time="2025-05-16T00:22:28.683950934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-786d5f7786-2pbzk,Uid:9824a9f5-eda0-4e88-bed1-bf9b90951f1d,Namespace:calico-system,Attempt:0,}" May 16 00:22:28.944375 kernel: bpftool[4021]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 16 00:22:29.266541 systemd-networkd[1440]: vxlan.calico: Link UP May 16 00:22:29.266545 systemd-networkd[1440]: vxlan.calico: Gained carrier May 16 00:22:29.925820 systemd-networkd[1440]: cali6066364af18: Link UP May 16 00:22:29.926168 systemd-networkd[1440]: cali6066364af18: Gained carrier May 16 00:22:29.937383 containerd[1554]: 2025-05-16 00:22:28.748 [INFO][4003] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:22:29.937383 containerd[1554]: 2025-05-16 00:22:29.374 [INFO][4003] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--786d5f7786--2pbzk-eth0 whisker-786d5f7786- calico-system 9824a9f5-eda0-4e88-bed1-bf9b90951f1d 902 0 2025-05-16 00:22:28 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:786d5f7786 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-786d5f7786-2pbzk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6066364af18 [] [] }} ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-" May 16 00:22:29.937383 containerd[1554]: 2025-05-16 00:22:29.374 [INFO][4003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" May 16 00:22:29.937383 containerd[1554]: 2025-05-16 00:22:29.881 [INFO][4079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" HandleID="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Workload="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.883 [INFO][4079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" HandleID="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Workload="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cc10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-786d5f7786-2pbzk", "timestamp":"2025-05-16 00:22:29.881209024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.883 [INFO][4079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.884 [INFO][4079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
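The CNI trace above opens by noting that /var/lib/calico/mtu does not exist: like the nodename file, the MTU file is written by calico/node, and on a freshly networked node the plugin proceeds with a fallback. A minimal sketch of that lookup; defaultMTU here is a placeholder assumption, not the value Calico actually computes:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

const (
	mtuFile    = "/var/lib/calico/mtu"
	defaultMTU = 1450 // assumption: a VXLAN-friendly placeholder, not Calico's computed value
)

// podMTU reads the MTU published by calico/node, falling back to a default
// when the file is absent ("File /var/lib/calico/mtu does not exist").
func podMTU() int {
	b, err := os.ReadFile(mtuFile)
	if err != nil {
		return defaultMTU
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil || n <= 0 {
		return defaultMTU
	}
	return n
}

func main() { fmt.Println("pod interface MTU:", podMTU()) }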
May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.884 [INFO][4079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.895 [INFO][4079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" host="localhost" May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.905 [INFO][4079] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.908 [INFO][4079] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.909 [INFO][4079] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.911 [INFO][4079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:29.938855 containerd[1554]: 2025-05-16 00:22:29.911 [INFO][4079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" host="localhost" May 16 00:22:29.940101 containerd[1554]: 2025-05-16 00:22:29.912 [INFO][4079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd May 16 00:22:29.940101 containerd[1554]: 2025-05-16 00:22:29.914 [INFO][4079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" host="localhost" May 16 00:22:29.940101 containerd[1554]: 2025-05-16 00:22:29.918 [INFO][4079] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" host="localhost" May 16 00:22:29.940101 containerd[1554]: 2025-05-16 00:22:29.918 [INFO][4079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" host="localhost" May 16 00:22:29.940101 containerd[1554]: 2025-05-16 00:22:29.918 [INFO][4079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
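The IPAM walk above confirms this host's affinity for the block 192.168.88.128/26 and claims its first free address, 192.168.88.129. A /26 holds 2^(32-26) = 64 addresses, spanning 192.168.88.128 through 192.168.88.191. The arithmetic can be checked with nothing but the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address copied from the ipam log lines above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	ip := netip.MustParseAddr("192.168.88.129")
	fmt.Println("in block:", block.Contains(ip))     // true
	fmt.Println("block size:", 1<<(32-block.Bits())) // 64
}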
May 16 00:22:29.940101 containerd[1554]: 2025-05-16 00:22:29.918 [INFO][4079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" HandleID="k8s-pod-network.36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Workload="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" May 16 00:22:29.940909 containerd[1554]: 2025-05-16 00:22:29.920 [INFO][4003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--786d5f7786--2pbzk-eth0", GenerateName:"whisker-786d5f7786-", Namespace:"calico-system", SelfLink:"", UID:"9824a9f5-eda0-4e88-bed1-bf9b90951f1d", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"786d5f7786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-786d5f7786-2pbzk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6066364af18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:29.940909 containerd[1554]: 2025-05-16 00:22:29.920 [INFO][4003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" May 16 00:22:29.941287 containerd[1554]: 2025-05-16 00:22:29.920 [INFO][4003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6066364af18 ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" May 16 00:22:29.941287 containerd[1554]: 2025-05-16 00:22:29.927 [INFO][4003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" May 16 00:22:29.941324 containerd[1554]: 2025-05-16 00:22:29.927 [INFO][4003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--786d5f7786--2pbzk-eth0", GenerateName:"whisker-786d5f7786-", Namespace:"calico-system", SelfLink:"", UID:"9824a9f5-eda0-4e88-bed1-bf9b90951f1d", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"786d5f7786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd", Pod:"whisker-786d5f7786-2pbzk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6066364af18", MAC:"82:76:da:16:92:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:29.941386 containerd[1554]: 2025-05-16 00:22:29.934 [INFO][4003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" Namespace="calico-system" Pod="whisker-786d5f7786-2pbzk" WorkloadEndpoint="localhost-k8s-whisker--786d5f7786--2pbzk-eth0" May 16 00:22:29.989880 containerd[1554]: time="2025-05-16T00:22:29.989796202Z" level=info msg="connecting to shim 36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd" address="unix:///run/containerd/s/67eabca003f5daafcb15b519b40cdc664c33b043994a16492904031d7d486752" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:30.010490 systemd[1]: Started cri-containerd-36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd.scope - libcontainer container 36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd. 
May 16 00:22:30.018565 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:30.050057 containerd[1554]: time="2025-05-16T00:22:30.050005672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-786d5f7786-2pbzk,Uid:9824a9f5-eda0-4e88-bed1-bf9b90951f1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"36dfd42e934546c67cd6c07f3bca5016a8bb4885babe2bc1216f04bd4ce1d2bd\"" May 16 00:22:30.071522 kubelet[2779]: I0516 00:22:30.071368 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb7a5fd-489c-4f9d-bbb4-c5102881a604" path="/var/lib/kubelet/pods/8eb7a5fd-489c-4f9d-bbb4-c5102881a604/volumes" May 16 00:22:30.080392 containerd[1554]: time="2025-05-16T00:22:30.080339441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:22:30.390866 containerd[1554]: time="2025-05-16T00:22:30.390828557Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:22:30.391202 containerd[1554]: time="2025-05-16T00:22:30.391174770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:22:30.392272 containerd[1554]: time="2025-05-16T00:22:30.392235610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:22:30.401441 kubelet[2779]: E0516 00:22:30.401368 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:22:30.401701 kubelet[2779]: E0516 00:22:30.401423 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:22:30.438179 kubelet[2779]: E0516 00:22:30.438140 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f2939758c3c44271a54467e3945f6783,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:22:30.451101 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL May 16 00:22:30.458867 containerd[1554]: time="2025-05-16T00:22:30.458634055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:22:30.703086 containerd[1554]: time="2025-05-16T00:22:30.702924570Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:22:30.704397 containerd[1554]: time="2025-05-16T00:22:30.704255590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:22:30.704397 containerd[1554]: time="2025-05-16T00:22:30.704371763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:22:30.704955 kubelet[2779]: E0516 00:22:30.704606 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:22:30.704955 kubelet[2779]: E0516 00:22:30.704641 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:22:30.705062 kubelet[2779]: E0516 00:22:30.704736 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:22:30.706018 kubelet[2779]: E0516 00:22:30.705991 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d" May 16 00:22:31.275587 kubelet[2779]: E0516 00:22:31.275512 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d" May 16 00:22:31.666480 systemd-networkd[1440]: cali6066364af18: Gained IPv6LL May 16 00:22:32.064691 containerd[1554]: time="2025-05-16T00:22:32.064273217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-drrg8,Uid:f8823d20-0cc7-41eb-b83e-818f1ec7a773,Namespace:calico-apiserver,Attempt:0,}" May 16 00:22:32.070056 containerd[1554]: time="2025-05-16T00:22:32.069665347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wc9z,Uid:b44c391c-7cde-4dcf-b519-756e8bbaeb58,Namespace:kube-system,Attempt:0,}" May 16 00:22:32.070164 containerd[1554]: time="2025-05-16T00:22:32.070125194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvpnm,Uid:0e107303-c733-4385-b560-7b7a1c214620,Namespace:kube-system,Attempt:0,}" May 16 00:22:32.237286 systemd-networkd[1440]: calia76c8e5bfd0: Link UP May 16 00:22:32.237877 systemd-networkd[1440]: calia76c8e5bfd0: Gained carrier May 16 00:22:32.259240 containerd[1554]: 2025-05-16 00:22:32.125 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0 coredns-674b8bbfcf- kube-system 0e107303-c733-4385-b560-7b7a1c214620 815 0 2025-05-16 00:21:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jvpnm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia76c8e5bfd0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-" May 16 00:22:32.259240 containerd[1554]: 2025-05-16 00:22:32.125 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" May 16 00:22:32.259240 containerd[1554]: 2025-05-16 00:22:32.170 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" HandleID="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Workload="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.171 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" HandleID="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Workload="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9900), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jvpnm", "timestamp":"2025-05-16 00:22:32.170800292 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.171 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.171 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
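[Annotation] Stepping back to the image pull failures at 00:22:30 above: every one of them bottoms out in the same HTTP exchange. containerd's resolver asks ghcr.io's token endpoint for an anonymous pull token and receives 403 Forbidden, which then surfaces as "failed to authorize" all the way up through kubelet. A minimal sketch that reproduces just that request, with the URL copied verbatim from the log (whether the registry still returns 403 depends on the repository's visibility when you run it):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The exact anonymous-token request containerd's resolver made,
	// per the "fetch failed" log entry above.
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A 403 here is what containerd reports as "failed to authorize".
	fmt.Println("status:", resp.Status)
	fmt.Println("body:", string(body))
}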
May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.171 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.182 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" host="localhost" May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.185 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.189 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.190 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.191 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:32.259526 containerd[1554]: 2025-05-16 00:22:32.191 [INFO][4233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" host="localhost" May 16 00:22:32.262840 containerd[1554]: 2025-05-16 00:22:32.192 [INFO][4233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3 May 16 00:22:32.262840 containerd[1554]: 2025-05-16 00:22:32.210 [INFO][4233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" host="localhost" May 16 00:22:32.262840 containerd[1554]: 2025-05-16 00:22:32.232 [INFO][4233] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" host="localhost" May 16 00:22:32.262840 containerd[1554]: 2025-05-16 00:22:32.232 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" host="localhost" May 16 00:22:32.262840 containerd[1554]: 2025-05-16 00:22:32.232 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
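[Annotation] The affinity steps above pin this node to the block 192.168.88.128/26, which holds 2^(32-26) = 64 addresses, 192.168.88.128 through 192.168.88.191. A quick standard-library check that every address handed out over the course of this log falls inside that block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 spans 2^(32-26) = 64 addresses.
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	// The addresses Calico IPAM assigns over the course of this log.
	for _, s := range []string{
		"192.168.88.129", // whisker
		"192.168.88.130", // coredns-jvpnm
		"192.168.88.131", // calico-apiserver
		"192.168.88.132", // coredns-4wc9z
		"192.168.88.133", // goldmane
	} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}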
May 16 00:22:32.262840 containerd[1554]: 2025-05-16 00:22:32.232 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" HandleID="k8s-pod-network.39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Workload="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" May 16 00:22:32.266693 containerd[1554]: 2025-05-16 00:22:32.235 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e107303-c733-4385-b560-7b7a1c214620", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jvpnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia76c8e5bfd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:32.266785 containerd[1554]: 2025-05-16 00:22:32.235 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" May 16 00:22:32.266785 containerd[1554]: 2025-05-16 00:22:32.235 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia76c8e5bfd0 ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" May 16 00:22:32.266785 containerd[1554]: 2025-05-16 00:22:32.238 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" May 16 00:22:32.266859 
containerd[1554]: 2025-05-16 00:22:32.239 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e107303-c733-4385-b560-7b7a1c214620", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3", Pod:"coredns-674b8bbfcf-jvpnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia76c8e5bfd0", MAC:"a6:06:6a:a1:77:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:32.266859 containerd[1554]: 2025-05-16 00:22:32.257 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvpnm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jvpnm-eth0" May 16 00:22:32.301162 containerd[1554]: time="2025-05-16T00:22:32.301123552Z" level=info msg="connecting to shim 39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3" address="unix:///run/containerd/s/e0034dcfee25481aeb0d187f8d85a592877812d6ac0f4b425f64d61327a41eec" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:32.325496 systemd[1]: Started cri-containerd-39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3.scope - libcontainer container 39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3. 
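[Annotation] Each "connecting to shim" entry names a unix socket under /run/containerd/s/ plus the wire protocol (ttrpc, version 3). At the transport level that address is just a unix-domain stream socket. A hedged sketch of the dial step only; real clients speak ttrpc over the connection via github.com/containerd/ttrpc, which this sketch does not implement:

package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Address format exactly as logged by containerd; this socket ID is
	// taken from the log and will not exist on another machine.
	addr := "unix:///run/containerd/s/e0034dcfee25481aeb0d187f8d85a592877812d6ac0f4b425f64d61327a41eec"

	path := strings.TrimPrefix(addr, "unix://")
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected off-host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", path)
}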
May 16 00:22:32.336916 systemd-networkd[1440]: calid82ce67765d: Link UP May 16 00:22:32.341411 systemd-networkd[1440]: calid82ce67765d: Gained carrier May 16 00:22:32.344946 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.134 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0 calico-apiserver-69f746874b- calico-apiserver f8823d20-0cc7-41eb-b83e-818f1ec7a773 828 0 2025-05-16 00:22:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69f746874b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69f746874b-drrg8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid82ce67765d [] [] }} ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.134 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.178 [INFO][4238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.178 [INFO][4238] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69f746874b-drrg8", "timestamp":"2025-05-16 00:22:32.178648754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.178 [INFO][4238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.232 [INFO][4238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
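[Annotation] Note the timestamps in the entry above: request [4238] logged "About to acquire host-wide IPAM lock" at 00:22:32.178 but only logged "Acquired" at 00:22:32.232, the same instant request [4233] logged "Released" earlier. Concurrent CNI ADDs on one host are serialized through that lock so two pods can never claim the same address from the block. A minimal sketch of the pattern, assuming nothing about Calico's actual lock implementation:

package main

import (
	"fmt"
	"sync"
)

// allocator serializes claims the way a host-wide IPAM lock does:
// only one CNI ADD at a time may scan the block and take the next IP.
type allocator struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock
	next int        // next free ordinal in the 192.168.88.128/26 block
}

func (a *allocator) claim(pod string) string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock"
	defer a.mu.Unlock() // "Released host-wide IPAM lock"
	a.next++
	return fmt.Sprintf("%s -> 192.168.88.%d/26", pod, 128+a.next)
}

func main() {
	a := &allocator{}
	var wg sync.WaitGroup
	// Two concurrent ADDs, like [4233] and [4238] above; completion
	// order is nondeterministic but the addresses never collide.
	for _, pod := range []string{"coredns-674b8bbfcf-jvpnm", "calico-apiserver-69f746874b-drrg8"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(a.claim(p))
		}(pod)
	}
	wg.Wait()
}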
May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.232 [INFO][4238] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.282 [INFO][4238] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.292 [INFO][4238] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.301 [INFO][4238] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.303 [INFO][4238] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.304 [INFO][4238] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.304 [INFO][4238] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.305 [INFO][4238] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9 May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.315 [INFO][4238] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.331 [INFO][4238] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.331 [INFO][4238] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" host="localhost" May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.331 [INFO][4238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
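[Annotation] Within the block, "Attempting to assign 1 addresses from block" amounts to a first-free-ordinal search, and the claimed address is block base plus ordinal, which is why the assignments in this log come out consecutive (.129, .130, .131, ...). For a /26 the whole allocation state fits in a single 64-bit word. A sketch of that search, not Calico's actual data structure; the up-front reservation of ordinal 0 is an assumption made here only because the log's first assignment was .129 rather than .128:

package main

import (
	"fmt"
	"math/bits"
)

func main() {
	// One bit per address in 192.168.88.128/26; bit i set = ordinal i used.
	// Ordinal 0 (the block's first address, .128) marked used up front;
	// see the caveat in the text above.
	var used uint64 = 1

	for i := 0; i < 5; i++ {
		// First zero bit = lowest free ordinal in the block.
		ordinal := bits.TrailingZeros64(^used)
		used |= 1 << ordinal
		fmt.Printf("claimed 192.168.88.%d/26 (ordinal %d)\n", 128+ordinal, ordinal)
	}
}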
May 16 00:22:32.360702 containerd[1554]: 2025-05-16 00:22:32.331 [INFO][4238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" May 16 00:22:32.361718 containerd[1554]: 2025-05-16 00:22:32.334 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0", GenerateName:"calico-apiserver-69f746874b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8823d20-0cc7-41eb-b83e-818f1ec7a773", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f746874b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69f746874b-drrg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid82ce67765d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:32.361718 containerd[1554]: 2025-05-16 00:22:32.334 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" May 16 00:22:32.361718 containerd[1554]: 2025-05-16 00:22:32.334 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid82ce67765d ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" May 16 00:22:32.361718 containerd[1554]: 2025-05-16 00:22:32.343 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" May 16 00:22:32.361718 containerd[1554]: 2025-05-16 00:22:32.343 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0", GenerateName:"calico-apiserver-69f746874b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8823d20-0cc7-41eb-b83e-818f1ec7a773", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f746874b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9", Pod:"calico-apiserver-69f746874b-drrg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid82ce67765d", MAC:"8a:d6:61:10:cb:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:32.361718 containerd[1554]: 2025-05-16 00:22:32.358 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-drrg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0" May 16 00:22:32.378037 containerd[1554]: time="2025-05-16T00:22:32.377986306Z" level=info msg="connecting to shim 96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" address="unix:///run/containerd/s/2359adeb47e48be9a8d89207c319499437acf1839527c5849bb0bc30cea6888a" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:32.392844 containerd[1554]: time="2025-05-16T00:22:32.392721137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvpnm,Uid:0e107303-c733-4385-b560-7b7a1c214620,Namespace:kube-system,Attempt:0,} returns sandbox id \"39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3\"" May 16 00:22:32.397295 containerd[1554]: time="2025-05-16T00:22:32.397228032Z" level=info msg="CreateContainer within sandbox \"39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:22:32.412578 systemd[1]: Started cri-containerd-96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9.scope - libcontainer container 96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9. 
May 16 00:22:32.416363 containerd[1554]: time="2025-05-16T00:22:32.416320383Z" level=info msg="Container f411e072c477e5f967c1548622261cfd08f3213202bf32bfd60772283918a1cd: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:32.421202 containerd[1554]: time="2025-05-16T00:22:32.421175928Z" level=info msg="CreateContainer within sandbox \"39c8390e378860d77bce781f6733510c24e5128300ef2378c24534638fa120f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f411e072c477e5f967c1548622261cfd08f3213202bf32bfd60772283918a1cd\"" May 16 00:22:32.421849 containerd[1554]: time="2025-05-16T00:22:32.421778684Z" level=info msg="StartContainer for \"f411e072c477e5f967c1548622261cfd08f3213202bf32bfd60772283918a1cd\"" May 16 00:22:32.422662 containerd[1554]: time="2025-05-16T00:22:32.422645709Z" level=info msg="connecting to shim f411e072c477e5f967c1548622261cfd08f3213202bf32bfd60772283918a1cd" address="unix:///run/containerd/s/e0034dcfee25481aeb0d187f8d85a592877812d6ac0f4b425f64d61327a41eec" protocol=ttrpc version=3 May 16 00:22:32.432755 systemd-networkd[1440]: calif7c4b1fe9cb: Link UP May 16 00:22:32.432918 systemd-networkd[1440]: calif7c4b1fe9cb: Gained carrier May 16 00:22:32.442935 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.142 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0 coredns-674b8bbfcf- kube-system b44c391c-7cde-4dcf-b519-756e8bbaeb58 817 0 2025-05-16 00:21:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4wc9z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7c4b1fe9cb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.142 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.179 [INFO][4243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" HandleID="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Workload="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.179 [INFO][4243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" HandleID="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Workload="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4wc9z", "timestamp":"2025-05-16 00:22:32.179217384 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.179 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.332 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.332 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.384 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.396 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.405 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.408 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.410 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.410 [INFO][4243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.412 [INFO][4243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.419 [INFO][4243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.424 [INFO][4243] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.424 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" host="localhost" May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.424 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
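[Annotation] The earlier "Gained IPv6LL" entries (vxlan.calico at 00:22:30.451, cali6066364af18 at 00:22:31.666) mark systemd-networkd observing a link-local IPv6 address on the interface. One classic derivation of that address is EUI-64: flip the universal/local bit of the MAC's first octet, splice ff:fe into the middle, and prefix fe80::. systemd-networkd may instead use stable-privacy addressing, so take this as the textbook construction rather than what this host necessarily did:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// eui64LinkLocal builds fe80::/64 + the EUI-64 interface ID from a MAC.
func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe // EUI-64 filler
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	// MAC of cali6066364af18 from the whisker endpoint dump above.
	mac, err := net.ParseMAC("82:76:da:16:92:df")
	if err != nil {
		panic(err)
	}
	fmt.Println(eui64LinkLocal(mac)) // fe80::8076:daff:fe16:92df
}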
May 16 00:22:32.444462 containerd[1554]: 2025-05-16 00:22:32.424 [INFO][4243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" HandleID="k8s-pod-network.1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Workload="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" May 16 00:22:32.445284 containerd[1554]: 2025-05-16 00:22:32.427 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b44c391c-7cde-4dcf-b519-756e8bbaeb58", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4wc9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c4b1fe9cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:32.445284 containerd[1554]: 2025-05-16 00:22:32.427 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" May 16 00:22:32.445284 containerd[1554]: 2025-05-16 00:22:32.427 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7c4b1fe9cb ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" May 16 00:22:32.445284 containerd[1554]: 2025-05-16 00:22:32.431 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" May 16 00:22:32.445284 
containerd[1554]: 2025-05-16 00:22:32.431 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b44c391c-7cde-4dcf-b519-756e8bbaeb58", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca", Pod:"coredns-674b8bbfcf-4wc9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c4b1fe9cb", MAC:"3a:e8:1d:54:b4:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:32.445284 containerd[1554]: 2025-05-16 00:22:32.441 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wc9z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wc9z-eth0" May 16 00:22:32.463505 systemd[1]: Started cri-containerd-f411e072c477e5f967c1548622261cfd08f3213202bf32bfd60772283918a1cd.scope - libcontainer container f411e072c477e5f967c1548622261cfd08f3213202bf32bfd60772283918a1cd. 
May 16 00:22:32.474559 containerd[1554]: time="2025-05-16T00:22:32.473826199Z" level=info msg="connecting to shim 1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca" address="unix:///run/containerd/s/d749fc1c79718c0870387c3f5f8d385098451f82450318642fcb79f8f5555294" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:32.504994 containerd[1554]: time="2025-05-16T00:22:32.504972552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-drrg8,Uid:f8823d20-0cc7-41eb-b83e-818f1ec7a773,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\"" May 16 00:22:32.508205 containerd[1554]: time="2025-05-16T00:22:32.508138532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:22:32.512548 systemd[1]: Started cri-containerd-1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca.scope - libcontainer container 1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca. May 16 00:22:32.517180 containerd[1554]: time="2025-05-16T00:22:32.517158895Z" level=info msg="StartContainer for \"f411e072c477e5f967c1548622261cfd08f3213202bf32bfd60772283918a1cd\" returns successfully" May 16 00:22:32.523437 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:32.548387 containerd[1554]: time="2025-05-16T00:22:32.548364988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wc9z,Uid:b44c391c-7cde-4dcf-b519-756e8bbaeb58,Namespace:kube-system,Attempt:0,} returns sandbox id \"1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca\"" May 16 00:22:32.556895 containerd[1554]: time="2025-05-16T00:22:32.556866608Z" level=info msg="CreateContainer within sandbox \"1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:22:32.565895 containerd[1554]: time="2025-05-16T00:22:32.565864768Z" level=info msg="Container 3f592d4f008b41b7481bd59b7777d45a217bacceb0bc32026e2aa529fc759ef5: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:32.568109 containerd[1554]: time="2025-05-16T00:22:32.568087556Z" level=info msg="CreateContainer within sandbox \"1743f02b2c80a72934024c3493ef0a08e0dd2241209c66ad7d10729a0794baca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f592d4f008b41b7481bd59b7777d45a217bacceb0bc32026e2aa529fc759ef5\"" May 16 00:22:32.568684 containerd[1554]: time="2025-05-16T00:22:32.568611379Z" level=info msg="StartContainer for \"3f592d4f008b41b7481bd59b7777d45a217bacceb0bc32026e2aa529fc759ef5\"" May 16 00:22:32.569267 containerd[1554]: time="2025-05-16T00:22:32.569254365Z" level=info msg="connecting to shim 3f592d4f008b41b7481bd59b7777d45a217bacceb0bc32026e2aa529fc759ef5" address="unix:///run/containerd/s/d749fc1c79718c0870387c3f5f8d385098451f82450318642fcb79f8f5555294" protocol=ttrpc version=3 May 16 00:22:32.584479 systemd[1]: Started cri-containerd-3f592d4f008b41b7481bd59b7777d45a217bacceb0bc32026e2aa529fc759ef5.scope - libcontainer container 3f592d4f008b41b7481bd59b7777d45a217bacceb0bc32026e2aa529fc759ef5. 
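[Annotation] The apiserver image pull started above will hit the same ghcr.io 403 as the whisker images, and the pod_workers entries earlier already show kubelet's response: ErrImagePull on the first attempt, then ImagePullBackOff with a growing delay between retries. The kubelet's documented image-pull backoff starts around 10 s, doubles per failure, and caps at 5 min; a sketch of that schedule, illustrative rather than kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet-style image pull backoff: ~10s initial, doubling, 5m cap.
	// These constants mirror the documented defaults; they are an
	// assumption, not values read from this node's configuration.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute

	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed (403 Forbidden); next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}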
May 16 00:22:32.609341 containerd[1554]: time="2025-05-16T00:22:32.609243682Z" level=info msg="StartContainer for \"3f592d4f008b41b7481bd59b7777d45a217bacceb0bc32026e2aa529fc759ef5\" returns successfully" May 16 00:22:33.063669 containerd[1554]: time="2025-05-16T00:22:33.063623299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-52cwp,Uid:52758d11-1828-43a1-82dc-8636be4d16ce,Namespace:calico-system,Attempt:0,}" May 16 00:22:33.063969 containerd[1554]: time="2025-05-16T00:22:33.063623299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8c456b68-lvb6f,Uid:b31223af-d117-49cb-b138-65ed4e7ea99c,Namespace:calico-apiserver,Attempt:0,}" May 16 00:22:33.232426 systemd-networkd[1440]: cali8f5bb4bddd7: Link UP May 16 00:22:33.233095 systemd-networkd[1440]: cali8f5bb4bddd7: Gained carrier May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.112 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0 goldmane-78d55f7ddc- calico-system 52758d11-1828-43a1-82dc-8636be4d16ce 825 0 2025-05-16 00:22:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-52cwp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8f5bb4bddd7 [] [] }} ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.112 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.132 [INFO][4503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" HandleID="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Workload="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.132 [INFO][4503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" HandleID="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Workload="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-52cwp", "timestamp":"2025-05-16 00:22:33.132882187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.133 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.133 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.133 [INFO][4503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.137 [INFO][4503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.142 [INFO][4503] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.147 [INFO][4503] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.148 [INFO][4503] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.149 [INFO][4503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.149 [INFO][4503] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.150 [INFO][4503] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235 May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.164 [INFO][4503] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.197 [INFO][4503] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.197 [INFO][4503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" host="localhost" May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.197 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
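The `[INFO][4503]` trace above is Calico's block-affinity IPAM at work: acquire the host-wide lock, find the block affine to this host (192.168.88.128/26), load it, claim the next free address under a per-container handle, and write the block back. A self-contained illustrative sketch of that allocation step follows; this is not Calico's actual code, and the type and field names are invented for illustration:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models a /26 affine to one host, with a handle recorded per
// claimed address, mirroring the handle the log writes back.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // addr -> allocation handle
}

func (b *block) autoAssign(handle string) (netip.Addr, error) {
	// Walk the block and claim the first free address, like
	// "Attempting to assign 1 addresses from block" in the trace.
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]string{}}
	// Pretend .128-.132 were already claimed by the earlier pods in this log.
	for a, n := netip.MustParseAddr("192.168.88.128"), 0; n < 5; a, n = a.Next(), n+1 {
		b.used[a] = "earlier-pod"
	}
	ip, _ := b.autoAssign("k8s-pod-network.1fff4919…") // handle truncated
	fmt.Println(ip) // 192.168.88.133, the address claimed for goldmane above
}
```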
May 16 00:22:33.288710 containerd[1554]: 2025-05-16 00:22:33.197 [INFO][4503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" HandleID="k8s-pod-network.1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Workload="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" May 16 00:22:33.292706 containerd[1554]: 2025-05-16 00:22:33.213 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"52758d11-1828-43a1-82dc-8636be4d16ce", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-52cwp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f5bb4bddd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:33.292706 containerd[1554]: 2025-05-16 00:22:33.213 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" May 16 00:22:33.292706 containerd[1554]: 2025-05-16 00:22:33.213 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f5bb4bddd7 ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" May 16 00:22:33.292706 containerd[1554]: 2025-05-16 00:22:33.233 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" May 16 00:22:33.292706 containerd[1554]: 2025-05-16 00:22:33.233 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"52758d11-1828-43a1-82dc-8636be4d16ce", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235", Pod:"goldmane-78d55f7ddc-52cwp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f5bb4bddd7", MAC:"5a:0e:64:a0:a6:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:33.292706 containerd[1554]: 2025-05-16 00:22:33.287 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" Namespace="calico-system" Pod="goldmane-78d55f7ddc-52cwp" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--52cwp-eth0" May 16 00:22:33.311885 systemd-networkd[1440]: calic7d9a09cd3d: Link UP May 16 00:22:33.314871 systemd-networkd[1440]: calic7d9a09cd3d: Gained carrier May 16 00:22:33.338850 kubelet[2779]: I0516 00:22:33.338540 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jvpnm" podStartSLOduration=34.338528602 podStartE2EDuration="34.338528602s" podCreationTimestamp="2025-05-16 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:22:33.309920965 +0000 UTC m=+39.368706596" watchObservedRunningTime="2025-05-16 00:22:33.338528602 +0000 UTC m=+39.397314228" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.128 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0 calico-apiserver-d8c456b68- calico-apiserver b31223af-d117-49cb-b138-65ed4e7ea99c 827 0 2025-05-16 00:22:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8c456b68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d8c456b68-lvb6f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7d9a09cd3d [] [] }} ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-" May 16 00:22:33.341445 
containerd[1554]: 2025-05-16 00:22:33.128 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.159 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" HandleID="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Workload="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.159 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" HandleID="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Workload="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235c00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d8c456b68-lvb6f", "timestamp":"2025-05-16 00:22:33.15920508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.159 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.197 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.197 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.270 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.274 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.277 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.278 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.279 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.279 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.280 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985 May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.283 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.303 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.305 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" host="localhost" May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.305 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
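The `assignArgs=ipam.AutoAssignArgs{...}` struct printed in the trace is Calico's own IPAM request type. A hedged outline of how a caller drives that API through libcalico-go; the field set matches what the log prints, but the client constructor and return types vary across Calico releases, so treat every signature here as an assumption rather than a reference:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Datastore connection comes from the environment (DATASTORE_TYPE, etc.).
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// HandleID pattern copied from the trace above.
	handle := "k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985"
	args := ipam.AutoAssignArgs{
		Num4:     1, // matches "Num4:1, Num6:0" in the log
		Num6:     0,
		HandleID: &handle,
		Hostname: "localhost",
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "localhost",
			"pod":       "calico-apiserver-d8c456b68-lvb6f",
		},
	}

	v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("assigned IPv4:", v4) // e.g. 192.168.88.134/26 as above
}
```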
May 16 00:22:33.341445 containerd[1554]: 2025-05-16 00:22:33.306 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" HandleID="k8s-pod-network.7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Workload="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" May 16 00:22:33.352279 containerd[1554]: 2025-05-16 00:22:33.307 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0", GenerateName:"calico-apiserver-d8c456b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"b31223af-d117-49cb-b138-65ed4e7ea99c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8c456b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d8c456b68-lvb6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7d9a09cd3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:33.352279 containerd[1554]: 2025-05-16 00:22:33.307 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" May 16 00:22:33.352279 containerd[1554]: 2025-05-16 00:22:33.307 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7d9a09cd3d ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" May 16 00:22:33.352279 containerd[1554]: 2025-05-16 00:22:33.314 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" May 16 00:22:33.352279 containerd[1554]: 2025-05-16 00:22:33.315 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0", GenerateName:"calico-apiserver-d8c456b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"b31223af-d117-49cb-b138-65ed4e7ea99c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8c456b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985", Pod:"calico-apiserver-d8c456b68-lvb6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7d9a09cd3d", MAC:"4a:5b:6b:1f:b9:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:33.352279 containerd[1554]: 2025-05-16 00:22:33.339 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-lvb6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--lvb6f-eth0" May 16 00:22:33.390271 kubelet[2779]: I0516 00:22:33.387960 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4wc9z" podStartSLOduration=34.387947134 podStartE2EDuration="34.387947134s" podCreationTimestamp="2025-05-16 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:22:33.35698813 +0000 UTC m=+39.415773755" watchObservedRunningTime="2025-05-16 00:22:33.387947134 +0000 UTC m=+39.446732760" May 16 00:22:33.395235 containerd[1554]: time="2025-05-16T00:22:33.395206969Z" level=info msg="connecting to shim 1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235" address="unix:///run/containerd/s/3bc31d1d28647c1f474332ae23e11c200a51e1e5cdfd0c952908244f05b30625" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:33.418578 systemd[1]: Started cri-containerd-1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235.scope - libcontainer container 1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235. 
May 16 00:22:33.432124 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:33.435551 containerd[1554]: time="2025-05-16T00:22:33.435464359Z" level=info msg="connecting to shim 7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985" address="unix:///run/containerd/s/d7d4a476d12b0625996d19e5c6752dec7ab3d5ad17daf1fe93dae76da073fc32" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:33.449435 systemd[1]: Started cri-containerd-7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985.scope - libcontainer container 7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985. May 16 00:22:33.461697 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:33.477644 containerd[1554]: time="2025-05-16T00:22:33.477619873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-52cwp,Uid:52758d11-1828-43a1-82dc-8636be4d16ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fff4919556da12e908c330184887b77a236ef82a6ddc8199fc71f7de77d3235\"" May 16 00:22:33.521269 containerd[1554]: time="2025-05-16T00:22:33.521238954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8c456b68-lvb6f,Uid:b31223af-d117-49cb-b138-65ed4e7ea99c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985\"" May 16 00:22:33.651609 systemd-networkd[1440]: calid82ce67765d: Gained IPv6LL May 16 00:22:33.970457 systemd-networkd[1440]: calif7c4b1fe9cb: Gained IPv6LL May 16 00:22:33.971048 systemd-networkd[1440]: calia76c8e5bfd0: Gained IPv6LL May 16 00:22:34.067989 containerd[1554]: time="2025-05-16T00:22:34.066529111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87cc6cc95-w8v7r,Uid:d8af785c-b34f-443e-8ddd-87c4ecca1bac,Namespace:calico-system,Attempt:0,}" May 16 00:22:34.083132 containerd[1554]: time="2025-05-16T00:22:34.083033393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-lqglx,Uid:9d770ce6-58c4-46a7-9547-3ce0bd7d1645,Namespace:calico-apiserver,Attempt:0,}" May 16 00:22:34.217903 systemd-networkd[1440]: cali96a2521d1bd: Link UP May 16 00:22:34.218250 systemd-networkd[1440]: cali96a2521d1bd: Gained carrier May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.129 [INFO][4638] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0 calico-kube-controllers-87cc6cc95- calico-system d8af785c-b34f-443e-8ddd-87c4ecca1bac 824 0 2025-05-16 00:22:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:87cc6cc95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-87cc6cc95-w8v7r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali96a2521d1bd [] [] }} ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.130 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.165 [INFO][4666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" HandleID="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Workload="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.165 [INFO][4666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" HandleID="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Workload="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-87cc6cc95-w8v7r", "timestamp":"2025-05-16 00:22:34.162772966 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.165 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.165 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.165 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.172 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.183 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.186 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.188 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.190 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.190 [INFO][4666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.191 [INFO][4666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938 May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.205 [INFO][4666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" host="localhost" May 16 
00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.210 [INFO][4666] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.210 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" host="localhost" May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.210 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:22:34.248719 containerd[1554]: 2025-05-16 00:22:34.210 [INFO][4666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" HandleID="k8s-pod-network.2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Workload="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" May 16 00:22:34.259284 containerd[1554]: 2025-05-16 00:22:34.215 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0", GenerateName:"calico-kube-controllers-87cc6cc95-", Namespace:"calico-system", SelfLink:"", UID:"d8af785c-b34f-443e-8ddd-87c4ecca1bac", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87cc6cc95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-87cc6cc95-w8v7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96a2521d1bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:34.259284 containerd[1554]: 2025-05-16 00:22:34.216 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" May 16 00:22:34.259284 containerd[1554]: 2025-05-16 00:22:34.216 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96a2521d1bd ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" 
Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" May 16 00:22:34.259284 containerd[1554]: 2025-05-16 00:22:34.217 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" May 16 00:22:34.259284 containerd[1554]: 2025-05-16 00:22:34.218 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0", GenerateName:"calico-kube-controllers-87cc6cc95-", Namespace:"calico-system", SelfLink:"", UID:"d8af785c-b34f-443e-8ddd-87c4ecca1bac", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87cc6cc95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938", Pod:"calico-kube-controllers-87cc6cc95-w8v7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96a2521d1bd", MAC:"9a:6c:79:3f:d1:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:34.259284 containerd[1554]: 2025-05-16 00:22:34.246 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" Namespace="calico-system" Pod="calico-kube-controllers-87cc6cc95-w8v7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87cc6cc95--w8v7r-eth0" May 16 00:22:34.284366 containerd[1554]: time="2025-05-16T00:22:34.284117965Z" level=info msg="connecting to shim 2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938" address="unix:///run/containerd/s/5c1600a42e78f07f6f77f05e36608b1ca6b0109ffceb941f9425d1c81b666eeb" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:34.332445 systemd[1]: Started cri-containerd-2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938.scope - libcontainer container 2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938. 
May 16 00:22:34.342029 systemd-networkd[1440]: cali9e2407ee734: Link UP May 16 00:22:34.342559 systemd-networkd[1440]: cali9e2407ee734: Gained carrier May 16 00:22:34.342725 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:34.355538 systemd-networkd[1440]: calic7d9a09cd3d: Gained IPv6LL May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.123 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0 calico-apiserver-69f746874b- calico-apiserver 9d770ce6-58c4-46a7-9547-3ce0bd7d1645 823 0 2025-05-16 00:22:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69f746874b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69f746874b-lqglx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e2407ee734 [] [] }} ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.124 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.173 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.174 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69f746874b-lqglx", "timestamp":"2025-05-16 00:22:34.173942098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.174 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.211 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.211 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.273 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.285 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.288 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.289 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.294 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.294 [INFO][4661] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.297 [INFO][4661] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97 May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.306 [INFO][4661] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.322 [INFO][4661] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.323 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" host="localhost" May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.323 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
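Host-side interface names like cali96a2521d1bd and cali9e2407ee734 in these traces are not random: Calico derives them from a hash of the workload endpoint so the name is stable and fits the kernel's 15-byte IFNAMSIZ limit (4-byte "cali" prefix plus 11 hex characters). A sketch of that style of derivation; the exact hash input Calico uses is an implementation detail, so feeding it the sandbox ID here is an assumption:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName derives a stable, IFNAMSIZ-safe host interface name:
// a 4-byte prefix plus 11 hex characters of a hash = 15 characters.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + fmt.Sprintf("%x", sum)[:11]
}

func main() {
	// Prints a cali… name of the same shape as those in the log; the exact
	// value depends on what Calico actually hashes, assumed here to be the ID.
	fmt.Println(vethName("2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938"))
}
```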
May 16 00:22:34.373127 containerd[1554]: 2025-05-16 00:22:34.323 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:34.373806 containerd[1554]: 2025-05-16 00:22:34.332 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0", GenerateName:"calico-apiserver-69f746874b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d770ce6-58c4-46a7-9547-3ce0bd7d1645", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f746874b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69f746874b-lqglx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e2407ee734", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:34.373806 containerd[1554]: 2025-05-16 00:22:34.339 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:34.373806 containerd[1554]: 2025-05-16 00:22:34.339 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e2407ee734 ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:34.373806 containerd[1554]: 2025-05-16 00:22:34.342 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:34.373806 containerd[1554]: 2025-05-16 00:22:34.342 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0", GenerateName:"calico-apiserver-69f746874b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d770ce6-58c4-46a7-9547-3ce0bd7d1645", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f746874b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97", Pod:"calico-apiserver-69f746874b-lqglx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e2407ee734", MAC:"86:bd:46:6c:10:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:34.373806 containerd[1554]: 2025-05-16 00:22:34.371 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Namespace="calico-apiserver" Pod="calico-apiserver-69f746874b-lqglx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:34.418663 systemd-networkd[1440]: cali8f5bb4bddd7: Gained IPv6LL May 16 00:22:34.469638 containerd[1554]: time="2025-05-16T00:22:34.469607515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87cc6cc95-w8v7r,Uid:d8af785c-b34f-443e-8ddd-87c4ecca1bac,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938\"" May 16 00:22:34.510295 containerd[1554]: time="2025-05-16T00:22:34.510207931Z" level=info msg="connecting to shim f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" address="unix:///run/containerd/s/42411c4ca41fcb511611e4f4025a40528ebf4d487bd182ee2796aecd813ba649" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:34.528459 systemd[1]: Started cri-containerd-f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97.scope - libcontainer container f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97. 
May 16 00:22:34.538277 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:34.579968 containerd[1554]: time="2025-05-16T00:22:34.579940924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f746874b-lqglx,Uid:9d770ce6-58c4-46a7-9547-3ce0bd7d1645,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\"" May 16 00:22:35.065363 containerd[1554]: time="2025-05-16T00:22:35.064844486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vl,Uid:67e150e9-6407-4d80-99aa-db86a8170146,Namespace:calico-system,Attempt:0,}" May 16 00:22:35.400874 systemd-networkd[1440]: cali34d7730d070: Link UP May 16 00:22:35.400993 systemd-networkd[1440]: cali34d7730d070: Gained carrier May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.290 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sv6vl-eth0 csi-node-driver- calico-system 67e150e9-6407-4d80-99aa-db86a8170146 715 0 2025-05-16 00:22:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sv6vl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali34d7730d070 [] [] }} ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.290 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-eth0" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.345 [INFO][4810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" HandleID="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Workload="localhost-k8s-csi--node--driver--sv6vl-eth0" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.346 [INFO][4810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" HandleID="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Workload="localhost-k8s-csi--node--driver--sv6vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sv6vl", "timestamp":"2025-05-16 00:22:35.345941771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.346 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.346 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.346 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.350 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.358 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.362 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.364 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.366 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.366 [INFO][4810] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.366 [INFO][4810] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27 May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.370 [INFO][4810] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.392 [INFO][4810] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.397 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" host="localhost" May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.397 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
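All of the `[INFO][48xx]` cni-plugin and ipam traces run inside a single CNI invocation: containerd execs the calico binary with CNI_COMMAND=ADD, and the plugin must print a JSON result (the assigned IPs) on stdout. A minimal skeleton of that contract using the upstream CNI library; the plugin name and the fixed address (the .137 just assigned to csi-node-driver-sv6vl) are placeholders, not Calico's logic:

```go
package main

import (
	"net"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

func cmdAdd(args *skel.CmdArgs) error {
	// A real plugin runs IPAM here (the AutoAssign trace above) and plumbs
	// the veth pair; this skeleton just reports a fixed address.
	ip := net.IPNet{IP: net.ParseIP("192.168.88.137"), Mask: net.CIDRMask(32, 32)}
	result := &current.Result{
		CNIVersion: current.ImplementedSpecVersion,
		IPs:        []*current.IPConfig{{Address: ip}},
	}
	// PrintResult writes the JSON result to stdout for the runtime to consume.
	return types.PrintResult(result, result.CNIVersion)
}

func cmdCheck(args *skel.CmdArgs) error { return nil }
func cmdDel(args *skel.CmdArgs) error   { return nil }

func main() {
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "demo CNI plugin")
}
```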
May 16 00:22:35.427935 containerd[1554]: 2025-05-16 00:22:35.397 [INFO][4810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" HandleID="k8s-pod-network.f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Workload="localhost-k8s-csi--node--driver--sv6vl-eth0" May 16 00:22:35.446948 containerd[1554]: 2025-05-16 00:22:35.398 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sv6vl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67e150e9-6407-4d80-99aa-db86a8170146", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sv6vl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34d7730d070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:35.446948 containerd[1554]: 2025-05-16 00:22:35.398 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-eth0" May 16 00:22:35.446948 containerd[1554]: 2025-05-16 00:22:35.398 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34d7730d070 ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-eth0" May 16 00:22:35.446948 containerd[1554]: 2025-05-16 00:22:35.400 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-eth0" May 16 00:22:35.446948 containerd[1554]: 2025-05-16 00:22:35.401 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sv6vl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67e150e9-6407-4d80-99aa-db86a8170146", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27", Pod:"csi-node-driver-sv6vl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34d7730d070", MAC:"a6:7d:82:93:d1:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:35.446948 containerd[1554]: 2025-05-16 00:22:35.425 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" Namespace="calico-system" Pod="csi-node-driver-sv6vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv6vl-eth0" May 16 00:22:35.442970 systemd-networkd[1440]: cali9e2407ee734: Gained IPv6LL May 16 00:22:35.507557 systemd-networkd[1440]: cali96a2521d1bd: Gained IPv6LL May 16 00:22:35.619246 containerd[1554]: time="2025-05-16T00:22:35.619018991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:35.689405 containerd[1554]: time="2025-05-16T00:22:35.641595734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 16 00:22:35.689405 containerd[1554]: time="2025-05-16T00:22:35.688515204Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:35.722377 containerd[1554]: time="2025-05-16T00:22:35.721943844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:35.722377 containerd[1554]: time="2025-05-16T00:22:35.722331919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.214170746s" May 16 00:22:35.722377 
containerd[1554]: time="2025-05-16T00:22:35.722365345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 00:22:35.723069 containerd[1554]: time="2025-05-16T00:22:35.723052333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:22:35.749704 containerd[1554]: time="2025-05-16T00:22:35.749464326Z" level=info msg="CreateContainer within sandbox \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:22:35.803930 containerd[1554]: time="2025-05-16T00:22:35.802552856Z" level=info msg="Container 225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:35.834570 containerd[1554]: time="2025-05-16T00:22:35.834546135Z" level=info msg="connecting to shim f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27" address="unix:///run/containerd/s/10405cd20e08dcd0d18d29e2672dcef3d1d5b3f563b610de8604255c730dac6d" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:35.846472 containerd[1554]: time="2025-05-16T00:22:35.846452454Z" level=info msg="CreateContainer within sandbox \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\"" May 16 00:22:35.847330 containerd[1554]: time="2025-05-16T00:22:35.847254431Z" level=info msg="StartContainer for \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\"" May 16 00:22:35.848101 containerd[1554]: time="2025-05-16T00:22:35.848078837Z" level=info msg="connecting to shim 225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25" address="unix:///run/containerd/s/2359adeb47e48be9a8d89207c319499437acf1839527c5849bb0bc30cea6888a" protocol=ttrpc version=3 May 16 00:22:35.852470 systemd[1]: Started cri-containerd-f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27.scope - libcontainer container f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27. May 16 00:22:35.866467 systemd[1]: Started cri-containerd-225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25.scope - libcontainer container 225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25. 
May 16 00:22:35.869934 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:35.890388 containerd[1554]: time="2025-05-16T00:22:35.890363137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vl,Uid:67e150e9-6407-4d80-99aa-db86a8170146,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27\"" May 16 00:22:35.921155 containerd[1554]: time="2025-05-16T00:22:35.921128152Z" level=info msg="StartContainer for \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" returns successfully" May 16 00:22:36.017068 containerd[1554]: time="2025-05-16T00:22:36.016982458Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:22:36.240502 containerd[1554]: time="2025-05-16T00:22:36.240404976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:22:36.240502 containerd[1554]: time="2025-05-16T00:22:36.240457536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:22:36.243215 kubelet[2779]: E0516 00:22:36.240750 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:22:36.243215 kubelet[2779]: E0516 00:22:36.243180 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:22:36.244107 containerd[1554]: time="2025-05-16T00:22:36.243711306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:22:36.269185 kubelet[2779]: E0516 00:22:36.269052 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zwxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-52cwp_calico-system(52758d11-1828-43a1-82dc-8636be4d16ce): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:22:36.270262 kubelet[2779]: E0516 00:22:36.270224 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce" May 16 00:22:36.306006 kubelet[2779]: E0516 00:22:36.305936 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce" May 16 00:22:36.390728 kubelet[2779]: I0516 00:22:36.390602 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69f746874b-drrg8" podStartSLOduration=26.174296946 podStartE2EDuration="29.390586616s" podCreationTimestamp="2025-05-16 00:22:07 +0000 UTC" firstStartedPulling="2025-05-16 00:22:32.506640244 +0000 UTC m=+38.565425868" lastFinishedPulling="2025-05-16 00:22:35.722929915 +0000 UTC m=+41.781715538" observedRunningTime="2025-05-16 00:22:36.3360402 +0000 UTC m=+42.394825822" watchObservedRunningTime="2025-05-16 00:22:36.390586616 +0000 UTC m=+42.449372247" May 16 00:22:36.702427 containerd[1554]: time="2025-05-16T00:22:36.702375738Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:36.703370 containerd[1554]: time="2025-05-16T00:22:36.702843053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 16 00:22:36.713114 containerd[1554]: time="2025-05-16T00:22:36.713045943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 469.314755ms" May 16 00:22:36.713114 containerd[1554]: time="2025-05-16T00:22:36.713067902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 00:22:36.713982 containerd[1554]: time="2025-05-16T00:22:36.713915477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 00:22:36.716140 containerd[1554]: time="2025-05-16T00:22:36.715953550Z" level=info msg="CreateContainer within sandbox \"7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:22:36.721361 containerd[1554]: time="2025-05-16T00:22:36.719070248Z" level=info msg="Container 2df294f02f1c664ef0ddd8d4f15c03e92ec233f27ba6f7bcf08892736cc631f4: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:36.729304 containerd[1554]: time="2025-05-16T00:22:36.729221006Z" level=info msg="CreateContainer within sandbox \"7f0d4c4351045c88c7c56e7e658e8e400d8b3caffedca6d799eb53c291650985\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2df294f02f1c664ef0ddd8d4f15c03e92ec233f27ba6f7bcf08892736cc631f4\"" May 16 00:22:36.729892 containerd[1554]: time="2025-05-16T00:22:36.729582801Z" level=info msg="StartContainer for \"2df294f02f1c664ef0ddd8d4f15c03e92ec233f27ba6f7bcf08892736cc631f4\"" May 16 00:22:36.730552 containerd[1554]: time="2025-05-16T00:22:36.730269256Z" level=info msg="connecting to shim 2df294f02f1c664ef0ddd8d4f15c03e92ec233f27ba6f7bcf08892736cc631f4" address="unix:///run/containerd/s/d7d4a476d12b0625996d19e5c6752dec7ab3d5ad17daf1fe93dae76da073fc32" protocol=ttrpc version=3 May 16 00:22:36.750430 systemd[1]: Started cri-containerd-2df294f02f1c664ef0ddd8d4f15c03e92ec233f27ba6f7bcf08892736cc631f4.scope - libcontainer container 2df294f02f1c664ef0ddd8d4f15c03e92ec233f27ba6f7bcf08892736cc631f4. May 16 00:22:36.781701 containerd[1554]: time="2025-05-16T00:22:36.781676345Z" level=info msg="StartContainer for \"2df294f02f1c664ef0ddd8d4f15c03e92ec233f27ba6f7bcf08892736cc631f4\" returns successfully" May 16 00:22:37.320167 kubelet[2779]: I0516 00:22:37.320112 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:22:37.362515 systemd-networkd[1440]: cali34d7730d070: Gained IPv6LL May 16 00:22:38.310904 kubelet[2779]: I0516 00:22:38.310561 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:22:38.731302 kubelet[2779]: I0516 00:22:38.731260 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d8c456b68-lvb6f" podStartSLOduration=27.539770449 podStartE2EDuration="30.731247427s" podCreationTimestamp="2025-05-16 00:22:08 +0000 UTC" firstStartedPulling="2025-05-16 00:22:33.522182939 +0000 UTC m=+39.580968562" lastFinishedPulling="2025-05-16 00:22:36.71365992 +0000 UTC m=+42.772445540" observedRunningTime="2025-05-16 00:22:37.318721184 +0000 UTC m=+43.377506816" watchObservedRunningTime="2025-05-16 00:22:38.731247427 +0000 UTC m=+44.790033052" May 16 00:22:38.897152 systemd[1]: Created slice kubepods-besteffort-podeb2e8e6d_ed21_4529_9b04_d2eacbe774fb.slice - libcontainer container kubepods-besteffort-podeb2e8e6d_ed21_4529_9b04_d2eacbe774fb.slice. 
May 16 00:22:39.066600 kubelet[2779]: I0516 00:22:39.066461 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjr5\" (UniqueName: \"kubernetes.io/projected/eb2e8e6d-ed21-4529-9b04-d2eacbe774fb-kube-api-access-9vjr5\") pod \"calico-apiserver-d8c456b68-5gvqs\" (UID: \"eb2e8e6d-ed21-4529-9b04-d2eacbe774fb\") " pod="calico-apiserver/calico-apiserver-d8c456b68-5gvqs" May 16 00:22:39.066600 kubelet[2779]: I0516 00:22:39.066499 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb2e8e6d-ed21-4529-9b04-d2eacbe774fb-calico-apiserver-certs\") pod \"calico-apiserver-d8c456b68-5gvqs\" (UID: \"eb2e8e6d-ed21-4529-9b04-d2eacbe774fb\") " pod="calico-apiserver/calico-apiserver-d8c456b68-5gvqs" May 16 00:22:39.499207 containerd[1554]: time="2025-05-16T00:22:39.499179962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8c456b68-5gvqs,Uid:eb2e8e6d-ed21-4529-9b04-d2eacbe774fb,Namespace:calico-apiserver,Attempt:0,}" May 16 00:22:39.931017 systemd-networkd[1440]: cali954b31559b9: Link UP May 16 00:22:39.931145 systemd-networkd[1440]: cali954b31559b9: Gained carrier May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.798 [INFO][4956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0 calico-apiserver-d8c456b68- calico-apiserver eb2e8e6d-ed21-4529-9b04-d2eacbe774fb 1033 0 2025-05-16 00:22:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8c456b68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d8c456b68-5gvqs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali954b31559b9 [] [] }} ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.816 [INFO][4956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.866 [INFO][4975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" HandleID="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Workload="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.866 [INFO][4975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" HandleID="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Workload="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-d8c456b68-5gvqs", "timestamp":"2025-05-16 00:22:39.866638522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.866 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.866 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.866 [INFO][4975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.872 [INFO][4975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.884 [INFO][4975] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.886 [INFO][4975] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.888 [INFO][4975] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.889 [INFO][4975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.889 [INFO][4975] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.890 [INFO][4975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.915 [INFO][4975] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.926 [INFO][4975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.926 [INFO][4975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" host="localhost" May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.926 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:22:39.943005 containerd[1554]: 2025-05-16 00:22:39.926 [INFO][4975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" HandleID="k8s-pod-network.054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Workload="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" May 16 00:22:39.960488 containerd[1554]: 2025-05-16 00:22:39.928 [INFO][4956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0", GenerateName:"calico-apiserver-d8c456b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb2e8e6d-ed21-4529-9b04-d2eacbe774fb", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8c456b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d8c456b68-5gvqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali954b31559b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:39.960488 containerd[1554]: 2025-05-16 00:22:39.929 [INFO][4956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" May 16 00:22:39.960488 containerd[1554]: 2025-05-16 00:22:39.929 [INFO][4956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali954b31559b9 ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" May 16 00:22:39.960488 containerd[1554]: 2025-05-16 00:22:39.932 [INFO][4956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" May 16 00:22:39.960488 containerd[1554]: 2025-05-16 00:22:39.933 [INFO][4956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0", GenerateName:"calico-apiserver-d8c456b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb2e8e6d-ed21-4529-9b04-d2eacbe774fb", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8c456b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f", Pod:"calico-apiserver-d8c456b68-5gvqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali954b31559b9", MAC:"56:75:cc:0b:10:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:22:39.960488 containerd[1554]: 2025-05-16 00:22:39.940 [INFO][4956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" Namespace="calico-apiserver" Pod="calico-apiserver-d8c456b68-5gvqs" WorkloadEndpoint="localhost-k8s-calico--apiserver--d8c456b68--5gvqs-eth0" May 16 00:22:40.075144 containerd[1554]: time="2025-05-16T00:22:40.074823067Z" level=info msg="connecting to shim 054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f" address="unix:///run/containerd/s/de7e8b18e6a77b029a64e402ca2256e5dbf57543f91a4aaa181d2fa0d8a45e51" namespace=k8s.io protocol=ttrpc version=3 May 16 00:22:40.099459 systemd[1]: Started cri-containerd-054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f.scope - libcontainer container 054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f. 
May 16 00:22:40.119451 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:22:40.161181 containerd[1554]: time="2025-05-16T00:22:40.161151016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8c456b68-5gvqs,Uid:eb2e8e6d-ed21-4529-9b04-d2eacbe774fb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f\"" May 16 00:22:40.222289 containerd[1554]: time="2025-05-16T00:22:40.222190360Z" level=info msg="CreateContainer within sandbox \"054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:22:40.244838 containerd[1554]: time="2025-05-16T00:22:40.244693190Z" level=info msg="Container f6e8c26f65797729130d024c6a873ac29315f8ad7d3159f3dd93534988af4646: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:40.253542 containerd[1554]: time="2025-05-16T00:22:40.253161439Z" level=info msg="CreateContainer within sandbox \"054a4367bdb8fb7a55e50dba5e20bc89fa6ef632190cbe487ae59c6f9d2a797f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f6e8c26f65797729130d024c6a873ac29315f8ad7d3159f3dd93534988af4646\"" May 16 00:22:40.254528 containerd[1554]: time="2025-05-16T00:22:40.254414972Z" level=info msg="StartContainer for \"f6e8c26f65797729130d024c6a873ac29315f8ad7d3159f3dd93534988af4646\"" May 16 00:22:40.255310 containerd[1554]: time="2025-05-16T00:22:40.255290422Z" level=info msg="connecting to shim f6e8c26f65797729130d024c6a873ac29315f8ad7d3159f3dd93534988af4646" address="unix:///run/containerd/s/de7e8b18e6a77b029a64e402ca2256e5dbf57543f91a4aaa181d2fa0d8a45e51" protocol=ttrpc version=3 May 16 00:22:40.270495 systemd[1]: Started cri-containerd-f6e8c26f65797729130d024c6a873ac29315f8ad7d3159f3dd93534988af4646.scope - libcontainer container f6e8c26f65797729130d024c6a873ac29315f8ad7d3159f3dd93534988af4646. 
May 16 00:22:40.331140 containerd[1554]: time="2025-05-16T00:22:40.331106957Z" level=info msg="StartContainer for \"f6e8c26f65797729130d024c6a873ac29315f8ad7d3159f3dd93534988af4646\" returns successfully" May 16 00:22:41.348173 kubelet[2779]: I0516 00:22:41.348128 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d8c456b68-5gvqs" podStartSLOduration=3.348016507 podStartE2EDuration="3.348016507s" podCreationTimestamp="2025-05-16 00:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:22:41.347593024 +0000 UTC m=+47.406378650" watchObservedRunningTime="2025-05-16 00:22:41.348016507 +0000 UTC m=+47.406802134" May 16 00:22:41.460895 systemd-networkd[1440]: cali954b31559b9: Gained IPv6LL May 16 00:22:41.883098 containerd[1554]: time="2025-05-16T00:22:41.883055046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:41.999634 containerd[1554]: time="2025-05-16T00:22:41.922394962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 16 00:22:42.000322 containerd[1554]: time="2025-05-16T00:22:41.997536500Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:42.000322 containerd[1554]: time="2025-05-16T00:22:41.999582012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 5.285650324s" May 16 00:22:42.000322 containerd[1554]: time="2025-05-16T00:22:42.000262009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 16 00:22:42.000873 containerd[1554]: time="2025-05-16T00:22:42.000461959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:42.093762 containerd[1554]: time="2025-05-16T00:22:42.093707776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:22:42.416248 containerd[1554]: time="2025-05-16T00:22:42.416020286Z" level=info msg="CreateContainer within sandbox \"2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 00:22:42.422661 kubelet[2779]: I0516 00:22:42.421342 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:22:42.449389 containerd[1554]: time="2025-05-16T00:22:42.448555212Z" level=info msg="Container 23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:42.484699 containerd[1554]: time="2025-05-16T00:22:42.484590487Z" level=info msg="CreateContainer within sandbox \"2b6547256f9dc8f49a89e9b6feb297941f5508814a12aed01d3ad9c9f9429938\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438\"" May 16 00:22:42.490192 containerd[1554]: time="2025-05-16T00:22:42.490163588Z" level=info msg="StartContainer for \"23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438\"" May 16 00:22:42.495611 containerd[1554]: time="2025-05-16T00:22:42.495583364Z" level=info msg="connecting to shim 23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438" address="unix:///run/containerd/s/5c1600a42e78f07f6f77f05e36608b1ca6b0109ffceb941f9425d1c81b666eeb" protocol=ttrpc version=3 May 16 00:22:42.580438 systemd[1]: Started cri-containerd-23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438.scope - libcontainer container 23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438. May 16 00:22:42.650153 containerd[1554]: time="2025-05-16T00:22:42.649704710Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:42.652181 containerd[1554]: time="2025-05-16T00:22:42.652154840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 16 00:22:42.659919 containerd[1554]: time="2025-05-16T00:22:42.659892171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 566.155731ms" May 16 00:22:42.659919 containerd[1554]: time="2025-05-16T00:22:42.659918570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 00:22:42.665513 containerd[1554]: time="2025-05-16T00:22:42.664369902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 00:22:42.685114 containerd[1554]: time="2025-05-16T00:22:42.684658430Z" level=info msg="CreateContainer within sandbox \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:22:42.688937 containerd[1554]: time="2025-05-16T00:22:42.688880699Z" level=info msg="StartContainer for \"23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438\" returns successfully" May 16 00:22:42.705223 containerd[1554]: time="2025-05-16T00:22:42.704758316Z" level=info msg="Container 2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:42.714061 containerd[1554]: time="2025-05-16T00:22:42.713957261Z" level=info msg="CreateContainer within sandbox \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\"" May 16 00:22:42.715161 containerd[1554]: time="2025-05-16T00:22:42.714893109Z" level=info msg="StartContainer for \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\"" May 16 00:22:42.716441 containerd[1554]: time="2025-05-16T00:22:42.715809917Z" level=info msg="connecting to shim 2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52" 
address="unix:///run/containerd/s/42411c4ca41fcb511611e4f4025a40528ebf4d487bd182ee2796aecd813ba649" protocol=ttrpc version=3 May 16 00:22:42.735457 systemd[1]: Started cri-containerd-2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52.scope - libcontainer container 2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52. May 16 00:22:42.773739 containerd[1554]: time="2025-05-16T00:22:42.773703184Z" level=info msg="StartContainer for \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" returns successfully" May 16 00:22:43.837893 containerd[1554]: time="2025-05-16T00:22:43.837758888Z" level=info msg="StopContainer for \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" with timeout 30 (s)" May 16 00:22:43.839904 containerd[1554]: time="2025-05-16T00:22:43.839815538Z" level=info msg="Stop container \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" with signal terminated" May 16 00:22:43.852654 systemd[1]: cri-containerd-2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52.scope: Deactivated successfully. May 16 00:22:43.852847 systemd[1]: cri-containerd-2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52.scope: Consumed 545ms CPU time, 24.8M memory peak, 6.1M read from disk. May 16 00:22:43.870049 containerd[1554]: time="2025-05-16T00:22:43.869969173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438\" id:\"efa3c33c2e29b1d74e54aec5eabb297e421f6cd279b6333dbac60e1435ab2c19\" pid:5164 exited_at:{seconds:1747354963 nanos:860498014}" May 16 00:22:43.870049 containerd[1554]: time="2025-05-16T00:22:43.870011028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" id:\"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" pid:5123 exit_status:1 exited_at:{seconds:1747354963 nanos:862473446}" May 16 00:22:43.870049 containerd[1554]: time="2025-05-16T00:22:43.869976253Z" level=info msg="received exit event container_id:\"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" id:\"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" pid:5123 exit_status:1 exited_at:{seconds:1747354963 nanos:862473446}" May 16 00:22:43.905971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52-rootfs.mount: Deactivated successfully. 
May 16 00:22:43.938408 kubelet[2779]: I0516 00:22:43.828621 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69f746874b-lqglx" podStartSLOduration=28.710857898 podStartE2EDuration="36.791291584s" podCreationTimestamp="2025-05-16 00:22:07 +0000 UTC" firstStartedPulling="2025-05-16 00:22:34.582495293 +0000 UTC m=+40.641280914" lastFinishedPulling="2025-05-16 00:22:42.66292898 +0000 UTC m=+48.721714600" observedRunningTime="2025-05-16 00:22:43.763180114 +0000 UTC m=+49.821965745" watchObservedRunningTime="2025-05-16 00:22:43.791291584 +0000 UTC m=+49.850077209" May 16 00:22:43.953268 containerd[1554]: time="2025-05-16T00:22:43.953237077Z" level=info msg="StopContainer for \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" returns successfully" May 16 00:22:43.958136 containerd[1554]: time="2025-05-16T00:22:43.958107783Z" level=info msg="StopPodSandbox for \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\"" May 16 00:22:43.958251 containerd[1554]: time="2025-05-16T00:22:43.958166592Z" level=info msg="Container to stop \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:22:43.961366 kubelet[2779]: I0516 00:22:43.961178 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-87cc6cc95-w8v7r" podStartSLOduration=26.356010038 podStartE2EDuration="33.961164103s" podCreationTimestamp="2025-05-16 00:22:10 +0000 UTC" firstStartedPulling="2025-05-16 00:22:34.470425261 +0000 UTC m=+40.529210885" lastFinishedPulling="2025-05-16 00:22:42.075579328 +0000 UTC m=+48.134364950" observedRunningTime="2025-05-16 00:22:43.783202009 +0000 UTC m=+49.841987641" watchObservedRunningTime="2025-05-16 00:22:43.961164103 +0000 UTC m=+50.019949735" May 16 00:22:43.969983 systemd[1]: cri-containerd-f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97.scope: Deactivated successfully. May 16 00:22:43.973013 containerd[1554]: time="2025-05-16T00:22:43.971287026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" id:\"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" pid:4775 exit_status:137 exited_at:{seconds:1747354963 nanos:970969729}" May 16 00:22:43.999417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97-rootfs.mount: Deactivated successfully. 
May 16 00:22:44.021329 containerd[1554]: time="2025-05-16T00:22:44.020646605Z" level=info msg="received exit event sandbox_id:\"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" exit_status:137 exited_at:{seconds:1747354963 nanos:970969729}" May 16 00:22:44.034554 containerd[1554]: time="2025-05-16T00:22:44.022862195Z" level=info msg="shim disconnected" id=f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97 namespace=k8s.io May 16 00:22:44.034554 containerd[1554]: time="2025-05-16T00:22:44.022875419Z" level=warning msg="cleaning up after shim disconnected" id=f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97 namespace=k8s.io May 16 00:22:44.034554 containerd[1554]: time="2025-05-16T00:22:44.022879980Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:22:44.024091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97-shm.mount: Deactivated successfully. May 16 00:22:44.429614 systemd-networkd[1440]: cali9e2407ee734: Link DOWN May 16 00:22:44.429619 systemd-networkd[1440]: cali9e2407ee734: Lost carrier May 16 00:22:44.601806 kubelet[2779]: I0516 00:22:44.565999 2779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.425 [INFO][5253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.428 [INFO][5253] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" iface="eth0" netns="/var/run/netns/cni-62302e20-b24e-e19b-0c80-50e085702489" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.428 [INFO][5253] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" iface="eth0" netns="/var/run/netns/cni-62302e20-b24e-e19b-0c80-50e085702489" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.433 [INFO][5253] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" after=4.863631ms iface="eth0" netns="/var/run/netns/cni-62302e20-b24e-e19b-0c80-50e085702489" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.433 [INFO][5253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.433 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.843 [INFO][5260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.849 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.849 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.947 [INFO][5260] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.947 [INFO][5260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.950 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:22:44.956363 containerd[1554]: 2025-05-16 00:22:44.952 [INFO][5253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:44.956363 containerd[1554]: time="2025-05-16T00:22:44.954174799Z" level=info msg="TearDown network for sandbox \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" successfully" May 16 00:22:44.956363 containerd[1554]: time="2025-05-16T00:22:44.954192192Z" level=info msg="StopPodSandbox for \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" returns successfully" May 16 00:22:44.956148 systemd[1]: run-netns-cni\x2d62302e20\x2db24e\x2de19b\x2d0c80\x2d50e085702489.mount: Deactivated successfully. May 16 00:22:45.322702 kubelet[2779]: I0516 00:22:45.322543 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jmw\" (UniqueName: \"kubernetes.io/projected/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-kube-api-access-98jmw\") pod \"9d770ce6-58c4-46a7-9547-3ce0bd7d1645\" (UID: \"9d770ce6-58c4-46a7-9547-3ce0bd7d1645\") " May 16 00:22:45.329723 kubelet[2779]: I0516 00:22:45.329533 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-calico-apiserver-certs\") pod \"9d770ce6-58c4-46a7-9547-3ce0bd7d1645\" (UID: \"9d770ce6-58c4-46a7-9547-3ce0bd7d1645\") " May 16 00:22:45.383723 systemd[1]: var-lib-kubelet-pods-9d770ce6\x2d58c4\x2d46a7\x2d9547\x2d3ce0bd7d1645-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 16 00:22:45.383798 systemd[1]: var-lib-kubelet-pods-9d770ce6\x2d58c4\x2d46a7\x2d9547\x2d3ce0bd7d1645-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98jmw.mount: Deactivated successfully. May 16 00:22:45.441957 kubelet[2779]: I0516 00:22:45.438795 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-kube-api-access-98jmw" (OuterVolumeSpecName: "kube-api-access-98jmw") pod "9d770ce6-58c4-46a7-9547-3ce0bd7d1645" (UID: "9d770ce6-58c4-46a7-9547-3ce0bd7d1645"). InnerVolumeSpecName "kube-api-access-98jmw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:22:45.442091 kubelet[2779]: I0516 00:22:45.441967 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9d770ce6-58c4-46a7-9547-3ce0bd7d1645" (UID: "9d770ce6-58c4-46a7-9547-3ce0bd7d1645"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:22:45.446939 kubelet[2779]: I0516 00:22:45.446855 2779 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 16 00:22:45.446939 kubelet[2779]: I0516 00:22:45.446902 2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98jmw\" (UniqueName: \"kubernetes.io/projected/9d770ce6-58c4-46a7-9547-3ce0bd7d1645-kube-api-access-98jmw\") on node \"localhost\" DevicePath \"\"" May 16 00:22:45.581327 systemd[1]: Removed slice kubepods-besteffort-pod9d770ce6_58c4_46a7_9547_3ce0bd7d1645.slice - libcontainer container kubepods-besteffort-pod9d770ce6_58c4_46a7_9547_3ce0bd7d1645.slice. May 16 00:22:45.581451 systemd[1]: kubepods-besteffort-pod9d770ce6_58c4_46a7_9547_3ce0bd7d1645.slice: Consumed 570ms CPU time, 25.5M memory peak, 6.1M read from disk. May 16 00:22:45.841784 containerd[1554]: time="2025-05-16T00:22:45.841698866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:45.857897 containerd[1554]: time="2025-05-16T00:22:45.844339041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 16 00:22:45.873363 containerd[1554]: time="2025-05-16T00:22:45.873320451Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:45.874653 containerd[1554]: time="2025-05-16T00:22:45.874620322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:45.875262 containerd[1554]: time="2025-05-16T00:22:45.875039949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 3.210645606s" May 16 00:22:45.875262 containerd[1554]: time="2025-05-16T00:22:45.875058820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 16 00:22:45.917606 containerd[1554]: time="2025-05-16T00:22:45.917578546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:22:45.960014 containerd[1554]: time="2025-05-16T00:22:45.959988023Z" level=info msg="CreateContainer within sandbox \"f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 00:22:46.018367 containerd[1554]: 
time="2025-05-16T00:22:46.018270182Z" level=info msg="Container d22e10c20a734440fd4d86c50226d4daeac2ee3d33e4f8fc1fa304a7733fab45: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:46.020977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764109746.mount: Deactivated successfully. May 16 00:22:46.057901 containerd[1554]: time="2025-05-16T00:22:46.057835726Z" level=info msg="CreateContainer within sandbox \"f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d22e10c20a734440fd4d86c50226d4daeac2ee3d33e4f8fc1fa304a7733fab45\"" May 16 00:22:46.059687 containerd[1554]: time="2025-05-16T00:22:46.058680868Z" level=info msg="StartContainer for \"d22e10c20a734440fd4d86c50226d4daeac2ee3d33e4f8fc1fa304a7733fab45\"" May 16 00:22:46.062174 containerd[1554]: time="2025-05-16T00:22:46.062153594Z" level=info msg="connecting to shim d22e10c20a734440fd4d86c50226d4daeac2ee3d33e4f8fc1fa304a7733fab45" address="unix:///run/containerd/s/10405cd20e08dcd0d18d29e2672dcef3d1d5b3f563b610de8604255c730dac6d" protocol=ttrpc version=3 May 16 00:22:46.087414 kubelet[2779]: I0516 00:22:46.086169 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d770ce6-58c4-46a7-9547-3ce0bd7d1645" path="/var/lib/kubelet/pods/9d770ce6-58c4-46a7-9547-3ce0bd7d1645/volumes" May 16 00:22:46.111445 systemd[1]: Started cri-containerd-d22e10c20a734440fd4d86c50226d4daeac2ee3d33e4f8fc1fa304a7733fab45.scope - libcontainer container d22e10c20a734440fd4d86c50226d4daeac2ee3d33e4f8fc1fa304a7733fab45. May 16 00:22:46.138321 containerd[1554]: time="2025-05-16T00:22:46.138294278Z" level=info msg="StartContainer for \"d22e10c20a734440fd4d86c50226d4daeac2ee3d33e4f8fc1fa304a7733fab45\" returns successfully" May 16 00:22:46.354931 containerd[1554]: time="2025-05-16T00:22:46.354903025Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:22:46.359228 containerd[1554]: time="2025-05-16T00:22:46.356760671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:22:46.385450 containerd[1554]: time="2025-05-16T00:22:46.385396697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:22:46.398488 kubelet[2779]: E0516 00:22:46.391364 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:22:46.398488 kubelet[2779]: E0516 00:22:46.397761 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:22:46.404811 containerd[1554]: time="2025-05-16T00:22:46.404584534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 00:22:46.408282 kubelet[2779]: E0516 00:22:46.407810 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f2939758c3c44271a54467e3945f6783,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:22:48.213210 containerd[1554]: time="2025-05-16T00:22:48.213041973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:48.213852 containerd[1554]: time="2025-05-16T00:22:48.213574873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 16 00:22:48.213852 containerd[1554]: time="2025-05-16T00:22:48.213829692Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:48.215236 containerd[1554]: time="2025-05-16T00:22:48.215219751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:48.215749 containerd[1554]: time="2025-05-16T00:22:48.215530471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.810307842s" May 16 00:22:48.215749 containerd[1554]: time="2025-05-16T00:22:48.215549269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 16 00:22:48.216798 containerd[1554]: time="2025-05-16T00:22:48.216379209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:22:48.218620 containerd[1554]: time="2025-05-16T00:22:48.218551487Z" level=info msg="CreateContainer within sandbox \"f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 00:22:48.225575 containerd[1554]: time="2025-05-16T00:22:48.225551904Z" level=info msg="Container a2dbd68e23c534de237ea44675d3c1806be9b89962655a738c483cbc2b762e9f: CDI devices from CRI Config.CDIDevices: []" May 16 00:22:48.227523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484064885.mount: Deactivated successfully. May 16 00:22:48.232969 containerd[1554]: time="2025-05-16T00:22:48.232946681Z" level=info msg="CreateContainer within sandbox \"f3020d901c36cf57093a2be6081892dc927c0fd970bd4ffffcac4f2487657a27\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a2dbd68e23c534de237ea44675d3c1806be9b89962655a738c483cbc2b762e9f\"" May 16 00:22:48.233654 containerd[1554]: time="2025-05-16T00:22:48.233514743Z" level=info msg="StartContainer for \"a2dbd68e23c534de237ea44675d3c1806be9b89962655a738c483cbc2b762e9f\"" May 16 00:22:48.234548 containerd[1554]: time="2025-05-16T00:22:48.234524452Z" level=info msg="connecting to shim a2dbd68e23c534de237ea44675d3c1806be9b89962655a738c483cbc2b762e9f" address="unix:///run/containerd/s/10405cd20e08dcd0d18d29e2672dcef3d1d5b3f563b610de8604255c730dac6d" protocol=ttrpc version=3 May 16 00:22:48.256490 systemd[1]: Started cri-containerd-a2dbd68e23c534de237ea44675d3c1806be9b89962655a738c483cbc2b762e9f.scope - libcontainer container a2dbd68e23c534de237ea44675d3c1806be9b89962655a738c483cbc2b762e9f. 
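
The entries around this point trace containerd's happy path for a CRI pull: ImageCreate events as content lands, a "Pulled image ... in 1.810307842s" summary, then CreateContainer inside the existing pod sandbox and StartContainer over the shim's ttrpc socket. For orientation, the same pull-and-unpack step can be driven directly with containerd's Go client; this is a minimal sketch, assuming the default socket path, root privileges, and the "k8s.io" namespace the CRI plugin uses (the import path shown is the containerd 1.x one; containerd 2.x moves the client to github.com/containerd/containerd/v2/client):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd instance the kubelet talks to.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack, mirroring the "PullImage ... returns image reference" entries.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
    }
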
May 16 00:22:48.297567 containerd[1554]: time="2025-05-16T00:22:48.297536151Z" level=info msg="StartContainer for \"a2dbd68e23c534de237ea44675d3c1806be9b89962655a738c483cbc2b762e9f\" returns successfully" May 16 00:22:48.489450 containerd[1554]: time="2025-05-16T00:22:48.488852639Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:22:48.522112 containerd[1554]: time="2025-05-16T00:22:48.522027481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:22:48.522112 containerd[1554]: time="2025-05-16T00:22:48.522056871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:22:48.522281 kubelet[2779]: E0516 00:22:48.522197 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:22:48.522281 kubelet[2779]: E0516 00:22:48.522262 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:22:48.528468 kubelet[2779]: E0516 00:22:48.522459 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:22:48.532295 containerd[1554]: time="2025-05-16T00:22:48.522850159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:22:48.540891 kubelet[2779]: E0516 00:22:48.540848 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d" May 
16 00:22:48.710830 kubelet[2779]: I0516 00:22:48.707939 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sv6vl" podStartSLOduration=26.371011818 podStartE2EDuration="38.695854603s" podCreationTimestamp="2025-05-16 00:22:10 +0000 UTC" firstStartedPulling="2025-05-16 00:22:35.891375189 +0000 UTC m=+41.950160811" lastFinishedPulling="2025-05-16 00:22:48.216217973 +0000 UTC m=+54.275003596" observedRunningTime="2025-05-16 00:22:48.655669414 +0000 UTC m=+54.714455051" watchObservedRunningTime="2025-05-16 00:22:48.695854603 +0000 UTC m=+54.754640229" May 16 00:22:48.822146 containerd[1554]: time="2025-05-16T00:22:48.822046232Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:22:48.822906 containerd[1554]: time="2025-05-16T00:22:48.822883736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:22:48.823017 containerd[1554]: time="2025-05-16T00:22:48.822952022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:22:48.823155 kubelet[2779]: E0516 00:22:48.823086 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:22:48.823155 kubelet[2779]: E0516 00:22:48.823123 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:22:48.823243 kubelet[2779]: E0516 00:22:48.823211 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zwxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-52cwp_calico-system(52758d11-1828-43a1-82dc-8636be4d16ce): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:22:48.824441 kubelet[2779]: E0516 00:22:48.824409 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce" May 16 00:22:49.436022 kubelet[2779]: I0516 00:22:49.435975 2779 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 00:22:49.454988 kubelet[2779]: I0516 00:22:49.454952 2779 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 00:22:54.135984 kubelet[2779]: I0516 00:22:54.135604 2779 scope.go:117] "RemoveContainer" containerID="2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52" May 16 00:22:54.144127 containerd[1554]: time="2025-05-16T00:22:54.143999531Z" level=info msg="RemoveContainer for \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\"" May 16 00:22:54.151930 containerd[1554]: time="2025-05-16T00:22:54.151908368Z" level=info msg="RemoveContainer for \"2db91f5ce4f51e96738a79ef5150fa3366d961c633cf04d81ae15e62bee97d52\" returns successfully" May 16 00:22:54.157151 containerd[1554]: time="2025-05-16T00:22:54.157116151Z" level=info msg="StopPodSandbox for \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\"" May 16 00:22:54.183931 kubelet[2779]: I0516 00:22:54.183908 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.566 [WARNING][5364] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.584 [INFO][5364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.584 [INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" iface="eth0" netns="" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.584 [INFO][5364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.584 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.752 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.752 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.752 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.784 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.784 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.787 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:22:54.790676 containerd[1554]: 2025-05-16 00:22:54.789 [INFO][5364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.794041 containerd[1554]: time="2025-05-16T00:22:54.790699786Z" level=info msg="TearDown network for sandbox \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" successfully" May 16 00:22:54.794041 containerd[1554]: time="2025-05-16T00:22:54.790717061Z" level=info msg="StopPodSandbox for \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" returns successfully" May 16 00:22:54.794041 containerd[1554]: time="2025-05-16T00:22:54.792890238Z" level=info msg="RemovePodSandbox for \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\"" May 16 00:22:54.794310 containerd[1554]: time="2025-05-16T00:22:54.794294278Z" level=info msg="Forcibly stopping sandbox \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\"" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.822 [WARNING][5390] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.822 [INFO][5390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.822 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" iface="eth0" netns="" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.822 [INFO][5390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.822 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.836 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.836 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.836 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.840 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.840 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" HandleID="k8s-pod-network.f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" Workload="localhost-k8s-calico--apiserver--69f746874b--lqglx-eth0" May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.841 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:22:54.844996 containerd[1554]: 2025-05-16 00:22:54.843 [INFO][5390] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97" May 16 00:22:54.846103 containerd[1554]: time="2025-05-16T00:22:54.845026138Z" level=info msg="TearDown network for sandbox \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" successfully" May 16 00:22:54.851315 containerd[1554]: time="2025-05-16T00:22:54.851292366Z" level=info msg="Ensure that sandbox f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97 in task-service has been cleanup successfully" May 16 00:22:54.855937 containerd[1554]: time="2025-05-16T00:22:54.855914368Z" level=info msg="RemovePodSandbox \"f42d4f85a79b4a9049d5932780b8e0e227e2510e6f341a1e1ec2fc23d9bd1a97\" returns successfully" May 16 00:22:58.802914 containerd[1554]: time="2025-05-16T00:22:58.802880645Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3\" id:\"6677b36aff0962d7ef69814bf93813ee94f38691b8b371cc6649af18eec5cd87\" pid:5416 exited_at:{seconds:1747354978 nanos:785802511}" May 16 00:23:02.339732 kubelet[2779]: E0516 00:23:02.339673 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce" May 16 00:23:04.809828 kubelet[2779]: E0516 00:23:04.808690 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d" May 16 00:23:05.397847 systemd[1]: Started sshd@7-139.178.70.108:22-147.75.109.163:56334.service - OpenSSH per-connection server daemon (147.75.109.163:56334). 
May 16 00:23:05.555198 sshd[5441]: Accepted publickey for core from 147.75.109.163 port 56334 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:23:05.557890 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:05.571459 systemd-logind[1535]: New session 10 of user core. May 16 00:23:05.576503 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 00:23:06.766185 sshd[5443]: Connection closed by 147.75.109.163 port 56334 May 16 00:23:06.766565 sshd-session[5441]: pam_unix(sshd:session): session closed for user core May 16 00:23:06.784441 systemd[1]: sshd@7-139.178.70.108:22-147.75.109.163:56334.service: Deactivated successfully. May 16 00:23:06.786810 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:23:06.797049 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit. May 16 00:23:06.798551 systemd-logind[1535]: Removed session 10. May 16 00:23:11.777937 systemd[1]: Started sshd@8-139.178.70.108:22-147.75.109.163:43052.service - OpenSSH per-connection server daemon (147.75.109.163:43052). May 16 00:23:11.892916 sshd[5464]: Accepted publickey for core from 147.75.109.163 port 43052 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:23:11.918948 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:11.929954 systemd-logind[1535]: New session 11 of user core. May 16 00:23:11.934503 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 00:23:12.244630 sshd[5467]: Connection closed by 147.75.109.163 port 43052 May 16 00:23:12.245012 sshd-session[5464]: pam_unix(sshd:session): session closed for user core May 16 00:23:12.247210 systemd[1]: sshd@8-139.178.70.108:22-147.75.109.163:43052.service: Deactivated successfully. May 16 00:23:12.248557 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:23:12.249213 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit. May 16 00:23:12.250001 systemd-logind[1535]: Removed session 11. 
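
sshd identifies the client key in the sessions above only by its SHA256 fingerprint. The same "SHA256:..." string can be recomputed with golang.org/x/crypto/ssh; the sketch below generates a throwaway ed25519 key so it runs self-contained, where in practice you would feed ssh.ParseAuthorizedKey the user's authorized_keys line instead:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Throwaway key so the example needs no key material; substitute a
        // parsed authorized_keys entry to check a real fingerprint.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            log.Fatal(err)
        }
        // Same format sshd logs on "Accepted publickey": SHA256:<unpadded base64>.
        fmt.Println(ssh.FingerprintSHA256(sshPub))
    }
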
May 16 00:23:14.295276 containerd[1554]: time="2025-05-16T00:23:14.240823233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438\" id:\"8aed58a4b5195b0d6665898bd7db08b2a0cbb7b0736fbc3989ad8e8a37ce4b59\" pid:5492 exited_at:{seconds:1747354994 nanos:150790564}" May 16 00:23:14.569218 containerd[1554]: time="2025-05-16T00:23:14.567482108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:23:15.204461 containerd[1554]: time="2025-05-16T00:23:15.204419202Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:23:15.214212 containerd[1554]: time="2025-05-16T00:23:15.212713808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:23:15.228402 containerd[1554]: time="2025-05-16T00:23:15.219346538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:23:15.367593 kubelet[2779]: E0516 00:23:15.353270 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:23:15.474773 kubelet[2779]: E0516 00:23:15.392830 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:23:15.492681 kubelet[2779]: E0516 00:23:15.492611 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zwxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-52cwp_calico-system(52758d11-1828-43a1-82dc-8636be4d16ce): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:23:15.524867 kubelet[2779]: E0516 00:23:15.524829 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce" May 16 00:23:16.070636 containerd[1554]: time="2025-05-16T00:23:16.070603789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:23:16.381753 containerd[1554]: time="2025-05-16T00:23:16.381664994Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:23:16.401938 containerd[1554]: time="2025-05-16T00:23:16.401898459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:23:16.402024 containerd[1554]: time="2025-05-16T00:23:16.401972999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:23:16.403360 kubelet[2779]: E0516 00:23:16.402109 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:23:16.403360 kubelet[2779]: E0516 00:23:16.402139 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:23:16.403360 kubelet[2779]: E0516 00:23:16.402210 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f2939758c3c44271a54467e3945f6783,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:23:16.403958 containerd[1554]: time="2025-05-16T00:23:16.403941108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:23:16.710627 containerd[1554]: time="2025-05-16T00:23:16.710524954Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:23:16.729035 containerd[1554]: time="2025-05-16T00:23:16.728699954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:23:16.729035 containerd[1554]: time="2025-05-16T00:23:16.728790612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:23:16.796091 kubelet[2779]: E0516 00:23:16.796061 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:23:16.796091 kubelet[2779]: E0516 00:23:16.796092 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:23:16.796234 kubelet[2779]: E0516 00:23:16.796181 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:23:16.801789 kubelet[2779]: E0516 00:23:16.801759 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d" May 16 00:23:17.263310 systemd[1]: Started sshd@9-139.178.70.108:22-147.75.109.163:43068.service - OpenSSH per-connection server daemon (147.75.109.163:43068). May 16 00:23:17.348226 containerd[1554]: time="2025-05-16T00:23:17.348189659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438\" id:\"a76f0eb93ccf09fe824947208f98d49ac6445bf9548595c1992813029707c977\" pid:5513 exited_at:{seconds:1747354997 nanos:336752537}" May 16 00:23:17.419285 sshd[5520]: Accepted publickey for core from 147.75.109.163 port 43068 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:23:17.422742 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:17.426123 systemd-logind[1535]: New session 12 of user core. May 16 00:23:17.431469 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 00:23:18.474018 sshd[5525]: Connection closed by 147.75.109.163 port 43068 May 16 00:23:18.474523 sshd-session[5520]: pam_unix(sshd:session): session closed for user core May 16 00:23:18.483975 systemd[1]: sshd@9-139.178.70.108:22-147.75.109.163:43068.service: Deactivated successfully. May 16 00:23:18.485284 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:23:18.486241 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit. May 16 00:23:18.495988 systemd[1]: Started sshd@10-139.178.70.108:22-147.75.109.163:59934.service - OpenSSH per-connection server daemon (147.75.109.163:59934). May 16 00:23:18.496868 systemd-logind[1535]: Removed session 12. May 16 00:23:18.728896 sshd[5537]: Accepted publickey for core from 147.75.109.163 port 59934 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:23:18.731305 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:18.737297 systemd-logind[1535]: New session 13 of user core. May 16 00:23:18.743484 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 00:23:19.373164 sshd[5540]: Connection closed by 147.75.109.163 port 59934 May 16 00:23:19.372574 sshd-session[5537]: pam_unix(sshd:session): session closed for user core May 16 00:23:19.380023 systemd[1]: sshd@10-139.178.70.108:22-147.75.109.163:59934.service: Deactivated successfully. May 16 00:23:19.382242 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:23:19.383951 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit. May 16 00:23:19.385582 systemd[1]: Started sshd@11-139.178.70.108:22-147.75.109.163:59950.service - OpenSSH per-connection server daemon (147.75.109.163:59950). May 16 00:23:19.388889 systemd-logind[1535]: Removed session 13. 
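
Every failed pull in this log dies at the same first step: before fetching any content, containerd's resolver requests an anonymous bearer token from ghcr.io, and that GET returns 403 Forbidden, so the registry is refusing even unauthenticated pull scope for these flatcar/calico repositories. The request is easy to replay outside containerd; a minimal sketch using a token URL copied verbatim from the entries above (a 200 would instead carry a JSON body whose "token" field containerd presents as a Bearer credential on the manifest request):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Same anonymous token request containerd issues; scope and service
        // are taken verbatim from the log.
        url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // A 403 here matches the "failed to fetch anonymous token" errors above.
        fmt.Println("status:", resp.Status)
    }
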
May 16 00:23:19.431979 sshd[5549]: Accepted publickey for core from 147.75.109.163 port 59950 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:23:19.432934 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:19.439554 systemd-logind[1535]: New session 14 of user core. May 16 00:23:19.444550 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 00:23:19.961259 sshd[5552]: Connection closed by 147.75.109.163 port 59950 May 16 00:23:19.961133 sshd-session[5549]: pam_unix(sshd:session): session closed for user core May 16 00:23:19.963809 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit. May 16 00:23:19.964118 systemd[1]: sshd@11-139.178.70.108:22-147.75.109.163:59950.service: Deactivated successfully. May 16 00:23:19.965816 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:23:19.967416 systemd-logind[1535]: Removed session 14. May 16 00:23:24.993103 systemd[1]: Started sshd@12-139.178.70.108:22-147.75.109.163:59966.service - OpenSSH per-connection server daemon (147.75.109.163:59966). May 16 00:23:25.238995 sshd[5576]: Accepted publickey for core from 147.75.109.163 port 59966 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:23:25.243921 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:25.249661 systemd-logind[1535]: New session 15 of user core. May 16 00:23:25.255453 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 00:23:26.089938 sshd[5578]: Connection closed by 147.75.109.163 port 59966 May 16 00:23:26.089877 sshd-session[5576]: pam_unix(sshd:session): session closed for user core May 16 00:23:26.092463 systemd[1]: sshd@12-139.178.70.108:22-147.75.109.163:59966.service: Deactivated successfully. May 16 00:23:26.095155 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:23:26.096852 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit. May 16 00:23:26.097899 systemd-logind[1535]: Removed session 15. 
May 16 00:23:27.209187 kubelet[2779]: E0516 00:23:27.209152 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce" May 16 00:23:28.067722 kubelet[2779]: E0516 00:23:28.067312 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d" May 16 00:23:29.303840 containerd[1554]: time="2025-05-16T00:23:29.279466602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf84c1e17d60aa205041eea0d2c683ea9732b65fd86a93185f8c277845711b3\" id:\"4c6e0e0a70cc7e22222e1f50cc956514930dd5c86764a3162f338dd1b3f9b41c\" pid:5601 exited_at:{seconds:1747355009 nanos:197584081}" May 16 00:23:31.154731 systemd[1]: Started sshd@13-139.178.70.108:22-147.75.109.163:37824.service - OpenSSH per-connection server daemon (147.75.109.163:37824). May 16 00:23:31.367961 sshd[5634]: Accepted publickey for core from 147.75.109.163 port 37824 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU May 16 00:23:31.372948 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:31.383812 systemd-logind[1535]: New session 16 of user core. May 16 00:23:31.388563 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 00:23:32.230987 sshd[5638]: Connection closed by 147.75.109.163 port 37824 May 16 00:23:32.231411 sshd-session[5634]: pam_unix(sshd:session): session closed for user core May 16 00:23:32.234968 systemd[1]: sshd@13-139.178.70.108:22-147.75.109.163:37824.service: Deactivated successfully. May 16 00:23:32.236979 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:23:32.238197 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit. May 16 00:23:32.239587 systemd-logind[1535]: Removed session 16. 
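
After the initial ErrImagePull failures the kubelet parks these pods in ImagePullBackOff, which is why the "Back-off pulling image" entries recur at widening intervals. Kubernetes documents the image-pull back-off as doubling on each consecutive failure up to a five-minute cap; a sketch of that schedule (the 10-second base and 300-second cap are the upstream defaults, not values read from this node's configuration):

    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the delay before the nth retry under a 10s-doubling,
    // 5-minute-capped schedule, matching the documented kubelet defaults.
    func backoff(n int) time.Duration {
        d := 10 * time.Second
        for i := 0; i < n; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for n := 0; n < 7; n++ {
            fmt.Printf("retry %d: wait %v\n", n, backoff(n))
        }
    }
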
May 16 00:23:37.451031 systemd[1]: Started sshd@14-139.178.70.108:22-147.75.109.163:37840.service - OpenSSH per-connection server daemon (147.75.109.163:37840).
May 16 00:23:38.207813 sshd[5659]: Accepted publickey for core from 147.75.109.163 port 37840 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU
May 16 00:23:38.216032 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:38.224490 systemd-logind[1535]: New session 17 of user core.
May 16 00:23:38.229954 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 00:23:38.382576 kubelet[2779]: E0516 00:23:38.379291 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce"
May 16 00:23:38.816164 kubelet[2779]: I0516 00:23:38.804338 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 16 00:23:39.144206 containerd[1554]: time="2025-05-16T00:23:39.133229629Z" level=info msg="StopContainer for \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" with timeout 30 (s)"
May 16 00:23:39.340637 containerd[1554]: time="2025-05-16T00:23:39.340607879Z" level=info msg="Stop container \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" with signal terminated"
May 16 00:23:39.528736 systemd[1]: cri-containerd-225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25.scope: Deactivated successfully.
May 16 00:23:39.528961 systemd[1]: cri-containerd-225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25.scope: Consumed 511ms CPU time, 58M memory peak, 17.8M read from disk.
May 16 00:23:39.632601 containerd[1554]: time="2025-05-16T00:23:39.632447905Z" level=info msg="received exit event container_id:\"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" id:\"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" pid:4879 exit_status:1 exited_at:{seconds:1747355019 nanos:587006198}"
May 16 00:23:39.660494 containerd[1554]: time="2025-05-16T00:23:39.660410775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" id:\"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" pid:4879 exit_status:1 exited_at:{seconds:1747355019 nanos:587006198}"
May 16 00:23:39.714239 sshd[5661]: Connection closed by 147.75.109.163 port 37840
May 16 00:23:39.716227 sshd-session[5659]: pam_unix(sshd:session): session closed for user core
May 16 00:23:39.717450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25-rootfs.mount: Deactivated successfully.
May 16 00:23:39.732148 systemd[1]: sshd@14-139.178.70.108:22-147.75.109.163:37840.service: Deactivated successfully.
May 16 00:23:39.734322 systemd[1]: session-17.scope: Deactivated successfully.
May 16 00:23:39.735186 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit.
May 16 00:23:39.741722 systemd[1]: Started sshd@15-139.178.70.108:22-147.75.109.163:60190.service - OpenSSH per-connection server daemon (147.75.109.163:60190).
May 16 00:23:39.744845 systemd-logind[1535]: Removed session 17.
May 16 00:23:39.745409 containerd[1554]: time="2025-05-16T00:23:39.744934475Z" level=info msg="StopContainer for \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" returns successfully"
May 16 00:23:39.802768 containerd[1554]: time="2025-05-16T00:23:39.802674630Z" level=info msg="StopPodSandbox for \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\""
May 16 00:23:39.804195 containerd[1554]: time="2025-05-16T00:23:39.804175239Z" level=info msg="Container to stop \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:23:39.812237 systemd[1]: cri-containerd-96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9.scope: Deactivated successfully.
May 16 00:23:39.814323 containerd[1554]: time="2025-05-16T00:23:39.814028073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" id:\"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" pid:4350 exit_status:137 exited_at:{seconds:1747355019 nanos:813613096}"
May 16 00:23:39.840022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9-rootfs.mount: Deactivated successfully.
May 16 00:23:39.841425 containerd[1554]: time="2025-05-16T00:23:39.841408547Z" level=info msg="shim disconnected" id=96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9 namespace=k8s.io
May 16 00:23:39.841534 containerd[1554]: time="2025-05-16T00:23:39.841461661Z" level=warning msg="cleaning up after shim disconnected" id=96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9 namespace=k8s.io
May 16 00:23:39.841534 containerd[1554]: time="2025-05-16T00:23:39.841468481Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:23:39.847362 sshd[5697]: Accepted publickey for core from 147.75.109.163 port 60190 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU
May 16 00:23:39.852948 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:39.863726 systemd-logind[1535]: New session 18 of user core.
May 16 00:23:39.869486 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 00:23:39.893114 containerd[1554]: time="2025-05-16T00:23:39.893088271Z" level=info msg="received exit event sandbox_id:\"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" exit_status:137 exited_at:{seconds:1747355019 nanos:813613096}"
May 16 00:23:39.896033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9-shm.mount: Deactivated successfully.
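The StopContainer sequence above ("with timeout 30 (s)", then "with signal terminated", then an exit event) is the usual SIGTERM-then-SIGKILL shutdown; the sandbox's exit_status:137 is 128+9, i.e. it ended on SIGKILL. A minimal sketch of that pattern, using a plain subprocess as a stand-in for the container task; illustrative only, not containerd's code:

# Sketch of the stop-with-timeout pattern visible above: send SIGTERM,
# wait out a grace period, then escalate to SIGKILL. A plain subprocess
# stands in for the container's task here.
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout: float = 30.0) -> int:
    proc.send_signal(signal.SIGTERM)       # polite stop ("signal terminated")
    try:
        return proc.wait(timeout=timeout)  # grace period ("timeout 30 (s)")
    except subprocess.TimeoutExpired:
        proc.kill()                        # escalate to SIGKILL (exit 137)
        return proc.wait()

if __name__ == "__main__":
    p = subprocess.Popen(["sleep", "60"])
    print("exit status:", stop_with_timeout(p, timeout=2.0))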
May 16 00:23:39.973598 kubelet[2779]: I0516 00:23:39.973558 2779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:40.443928 sshd[5729]: Connection closed by 147.75.109.163 port 60190
May 16 00:23:40.450875 sshd-session[5697]: pam_unix(sshd:session): session closed for user core
May 16 00:23:40.458382 systemd[1]: Started sshd@16-139.178.70.108:22-147.75.109.163:60206.service - OpenSSH per-connection server daemon (147.75.109.163:60206).
May 16 00:23:40.478997 systemd[1]: sshd@15-139.178.70.108:22-147.75.109.163:60190.service: Deactivated successfully.
May 16 00:23:40.480102 systemd[1]: session-18.scope: Deactivated successfully.
May 16 00:23:40.482699 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit.
May 16 00:23:40.483613 systemd-networkd[1440]: calid82ce67765d: Link DOWN
May 16 00:23:40.483617 systemd-networkd[1440]: calid82ce67765d: Lost carrier
May 16 00:23:40.484630 systemd-logind[1535]: Removed session 18.
May 16 00:23:40.756539 sshd[5761]: Accepted publickey for core from 147.75.109.163 port 60206 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU
May 16 00:23:40.767199 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:40.780894 systemd-logind[1535]: New session 19 of user core.
May 16 00:23:40.787473 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:40.446 [INFO][5750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:40.453 [INFO][5750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" iface="eth0" netns="/var/run/netns/cni-84d17afa-c5ba-1625-76a2-079a9cd8f46a"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:40.453 [INFO][5750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" iface="eth0" netns="/var/run/netns/cni-84d17afa-c5ba-1625-76a2-079a9cd8f46a"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:40.470 [INFO][5750] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" after=16.800037ms iface="eth0" netns="/var/run/netns/cni-84d17afa-c5ba-1625-76a2-079a9cd8f46a"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:40.470 [INFO][5750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:40.470 [INFO][5750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:41.013 [INFO][5764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:41.015 [INFO][5764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:41.017 [INFO][5764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:41.250 [INFO][5764] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:41.250 [INFO][5764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:41.253 [INFO][5764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 16 00:23:41.287491 containerd[1554]: 2025-05-16 00:23:41.268 [INFO][5750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:41.303572 systemd[1]: run-netns-cni\x2d84d17afa\x2dc5ba\x2d1625\x2d76a2\x2d079a9cd8f46a.mount: Deactivated successfully.
May 16 00:23:41.311800 containerd[1554]: time="2025-05-16T00:23:41.310341474Z" level=info msg="TearDown network for sandbox \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" successfully"
May 16 00:23:41.311800 containerd[1554]: time="2025-05-16T00:23:41.311528985Z" level=info msg="StopPodSandbox for \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" returns successfully"
May 16 00:23:42.977385 kubelet[2779]: I0516 00:23:42.977274 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8798d\" (UniqueName: \"kubernetes.io/projected/f8823d20-0cc7-41eb-b83e-818f1ec7a773-kube-api-access-8798d\") pod \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\" (UID: \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\") "
May 16 00:23:42.977385 kubelet[2779]: I0516 00:23:42.977336 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8823d20-0cc7-41eb-b83e-818f1ec7a773-calico-apiserver-certs\") pod \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\" (UID: \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\") "
May 16 00:23:43.123403 sshd[5776]: Connection closed by 147.75.109.163 port 60206
May 16 00:23:43.177502 sshd-session[5761]: pam_unix(sshd:session): session closed for user core
May 16 00:23:43.256412 systemd[1]: sshd@16-139.178.70.108:22-147.75.109.163:60206.service: Deactivated successfully.
May 16 00:23:43.258128 systemd[1]: session-19.scope: Deactivated successfully.
May 16 00:23:43.258288 systemd[1]: session-19.scope: Consumed 278ms CPU time, 65.9M memory peak.
May 16 00:23:43.259654 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit.
May 16 00:23:43.265731 systemd[1]: Started sshd@17-139.178.70.108:22-147.75.109.163:60210.service - OpenSSH per-connection server daemon (147.75.109.163:60210).
May 16 00:23:43.267433 systemd-logind[1535]: Removed session 19.
May 16 00:23:43.548254 sshd[5797]: Accepted publickey for core from 147.75.109.163 port 60210 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU
May 16 00:23:43.543450 systemd[1]: var-lib-kubelet-pods-f8823d20\x2d0cc7\x2d41eb\x2db83e\x2d818f1ec7a773-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8798d.mount: Deactivated successfully.
May 16 00:23:43.551909 sshd-session[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:43.548041 systemd[1]: var-lib-kubelet-pods-f8823d20\x2d0cc7\x2d41eb\x2db83e\x2d818f1ec7a773-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 16 00:23:43.568495 systemd-logind[1535]: New session 20 of user core.
May 16 00:23:43.574487 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 00:23:43.907036 kubelet[2779]: I0516 00:23:43.900092 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8823d20-0cc7-41eb-b83e-818f1ec7a773-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f8823d20-0cc7-41eb-b83e-818f1ec7a773" (UID: "f8823d20-0cc7-41eb-b83e-818f1ec7a773"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 16 00:23:43.930695 kubelet[2779]: I0516 00:23:43.930612 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8823d20-0cc7-41eb-b83e-818f1ec7a773-kube-api-access-8798d" (OuterVolumeSpecName: "kube-api-access-8798d") pod "f8823d20-0cc7-41eb-b83e-818f1ec7a773" (UID: "f8823d20-0cc7-41eb-b83e-818f1ec7a773"). InnerVolumeSpecName "kube-api-access-8798d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:23:43.997035 kubelet[2779]: I0516 00:23:43.997013 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8798d\" (UniqueName: \"kubernetes.io/projected/f8823d20-0cc7-41eb-b83e-818f1ec7a773-kube-api-access-8798d\") pod \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\" (UID: \"f8823d20-0cc7-41eb-b83e-818f1ec7a773\") "
May 16 00:23:44.029654 kubelet[2779]: W0516 00:23:44.025572 2779 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f8823d20-0cc7-41eb-b83e-818f1ec7a773/volumes/kubernetes.io~projected/kube-api-access-8798d
May 16 00:23:44.029654 kubelet[2779]: I0516 00:23:44.029497 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8823d20-0cc7-41eb-b83e-818f1ec7a773-kube-api-access-8798d" (OuterVolumeSpecName: "kube-api-access-8798d") pod "f8823d20-0cc7-41eb-b83e-818f1ec7a773" (UID: "f8823d20-0cc7-41eb-b83e-818f1ec7a773"). InnerVolumeSpecName "kube-api-access-8798d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:23:44.163407 kubelet[2779]: I0516 00:23:44.161297 2779 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8823d20-0cc7-41eb-b83e-818f1ec7a773-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 16 00:23:44.265720 kubelet[2779]: I0516 00:23:44.265294 2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8798d\" (UniqueName: \"kubernetes.io/projected/f8823d20-0cc7-41eb-b83e-818f1ec7a773-kube-api-access-8798d\") on node \"localhost\" DevicePath \"\""
May 16 00:23:44.550591 kubelet[2779]: E0516 00:23:44.549645 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d"
May 16 00:23:45.138738 containerd[1554]: time="2025-05-16T00:23:45.134837785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23c014efccae973a1546c7131a694df96b5bc13d78434a7b6c9be3ff82edd438\" id:\"145ade9d62eafeac0caa17e674b4148fba9806df649aac683e81de4c3efb5c90\" pid:5823 exited_at:{seconds:1747355025 nanos:76812631}"
May 16 00:23:45.186323 systemd[1]: Removed slice kubepods-besteffort-podf8823d20_0cc7_41eb_b83e_818f1ec7a773.slice - libcontainer container kubepods-besteffort-podf8823d20_0cc7_41eb_b83e_818f1ec7a773.slice.
May 16 00:23:45.186408 systemd[1]: kubepods-besteffort-podf8823d20_0cc7_41eb_b83e_818f1ec7a773.slice: Consumed 535ms CPU time, 58.7M memory peak, 17.8M read from disk.
May 16 00:23:48.257268 kubelet[2779]: I0516 00:23:48.224422 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8823d20-0cc7-41eb-b83e-818f1ec7a773" path="/var/lib/kubelet/pods/f8823d20-0cc7-41eb-b83e-818f1ec7a773/volumes"
May 16 00:23:49.274176 sshd[5806]: Connection closed by 147.75.109.163 port 60210
May 16 00:23:49.305899 sshd-session[5797]: pam_unix(sshd:session): session closed for user core
May 16 00:23:49.422804 systemd[1]: sshd@17-139.178.70.108:22-147.75.109.163:60210.service: Deactivated successfully.
May 16 00:23:49.423883 systemd[1]: session-20.scope: Deactivated successfully.
May 16 00:23:49.424001 systemd[1]: session-20.scope: Consumed 796ms CPU time, 67.7M memory peak.
May 16 00:23:49.424339 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit.
May 16 00:23:49.440275 systemd[1]: Started sshd@18-139.178.70.108:22-147.75.109.163:53138.service - OpenSSH per-connection server daemon (147.75.109.163:53138).
May 16 00:23:49.441440 systemd-logind[1535]: Removed session 20.
May 16 00:23:49.605387 sshd[5845]: Accepted publickey for core from 147.75.109.163 port 53138 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU
May 16 00:23:49.627459 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:49.639050 systemd-logind[1535]: New session 21 of user core.
May 16 00:23:49.640454 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 00:23:50.803787 sshd[5850]: Connection closed by 147.75.109.163 port 53138
May 16 00:23:50.804779 sshd-session[5845]: pam_unix(sshd:session): session closed for user core
May 16 00:23:50.808878 systemd[1]: sshd@18-139.178.70.108:22-147.75.109.163:53138.service: Deactivated successfully.
May 16 00:23:50.811204 systemd[1]: session-21.scope: Deactivated successfully.
May 16 00:23:50.812810 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit.
May 16 00:23:50.813953 systemd-logind[1535]: Removed session 21.
May 16 00:23:54.336660 kubelet[2779]: E0516 00:23:54.336521 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-52cwp" podUID="52758d11-1828-43a1-82dc-8636be4d16ce"
May 16 00:23:54.957588 kubelet[2779]: I0516 00:23:54.957546 2779 scope.go:117] "RemoveContainer" containerID="225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25"
May 16 00:23:55.082592 containerd[1554]: time="2025-05-16T00:23:55.070919597Z" level=info msg="RemoveContainer for \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\""
May 16 00:23:55.164652 containerd[1554]: time="2025-05-16T00:23:55.164596920Z" level=info msg="RemoveContainer for \"225900eb8b702e488a4765387da782d7becb4a0e31de8fdfb7a578703a940f25\" returns successfully"
May 16 00:23:55.169459 containerd[1554]: time="2025-05-16T00:23:55.169429474Z" level=info msg="StopPodSandbox for \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\""
May 16 00:23:55.869889 systemd[1]: Started sshd@19-139.178.70.108:22-147.75.109.163:53146.service - OpenSSH per-connection server daemon (147.75.109.163:53146).
May 16 00:23:56.106497 sshd[5888]: Accepted publickey for core from 147.75.109.163 port 53146 ssh2: RSA SHA256:EfNgMsQPwPDdLbPZByY9C3lyE6YZGC6uGSeNGzrF4TU
May 16 00:23:56.115295 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:56.127646 systemd-logind[1535]: New session 22 of user core.
May 16 00:23:56.134522 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:55.794 [WARNING][5879] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:55.794 [INFO][5879] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:55.794 [INFO][5879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" iface="eth0" netns=""
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:55.795 [INFO][5879] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:55.795 [INFO][5879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:56.377 [INFO][5886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:56.380 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:56.381 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:56.411 [WARNING][5886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:56.411 [INFO][5886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:56.414 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 16 00:23:56.432979 containerd[1554]: 2025-05-16 00:23:56.421 [INFO][5879] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.440360 containerd[1554]: time="2025-05-16T00:23:56.433033442Z" level=info msg="TearDown network for sandbox \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" successfully"
May 16 00:23:56.440360 containerd[1554]: time="2025-05-16T00:23:56.436834751Z" level=info msg="StopPodSandbox for \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" returns successfully"
May 16 00:23:56.451342 containerd[1554]: time="2025-05-16T00:23:56.451067355Z" level=info msg="RemovePodSandbox for \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\""
May 16 00:23:56.453917 containerd[1554]: time="2025-05-16T00:23:56.453013343Z" level=info msg="Forcibly stopping sandbox \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\""
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.646 [WARNING][5907] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.646 [INFO][5907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.646 [INFO][5907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" iface="eth0" netns=""
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.646 [INFO][5907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.646 [INFO][5907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.672 [INFO][5917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.672 [INFO][5917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.672 [INFO][5917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.682 [WARNING][5917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.683 [INFO][5917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" HandleID="k8s-pod-network.96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9" Workload="localhost-k8s-calico--apiserver--69f746874b--drrg8-eth0"
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.687 [INFO][5917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 16 00:23:56.693016 containerd[1554]: 2025-05-16 00:23:56.690 [INFO][5907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9"
May 16 00:23:56.701123 containerd[1554]: time="2025-05-16T00:23:56.693564239Z" level=info msg="TearDown network for sandbox \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" successfully"
May 16 00:23:56.741363 containerd[1554]: time="2025-05-16T00:23:56.741109897Z" level=info msg="Ensure that sandbox 96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9 in task-service has been cleanup successfully"
May 16 00:23:56.782613 containerd[1554]: time="2025-05-16T00:23:56.782374754Z" level=info msg="RemovePodSandbox \"96046ad50a8a54256416559d0bc1f5fd508ce2d3f26f860d834582666007c0d9\" returns successfully"
May 16 00:23:57.232936 containerd[1554]: time="2025-05-16T00:23:57.232898721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 16 00:23:57.680518 sshd[5893]: Connection closed by 147.75.109.163 port 53146
May 16 00:23:57.682798 sshd-session[5888]: pam_unix(sshd:session): session closed for user core
May 16 00:23:57.687322 systemd[1]: sshd@19-139.178.70.108:22-147.75.109.163:53146.service: Deactivated successfully.
May 16 00:23:57.688967 systemd[1]: session-22.scope: Deactivated successfully.
May 16 00:23:57.689618 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit.
May 16 00:23:57.690801 systemd-logind[1535]: Removed session 22.
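Note that this second StopPodSandbox/RemovePodSandbox pass still returns successfully even though the workload endpoint and IP allocation are already gone: each CNI step treats "not found" as success, which makes the teardown safe to repeat. A toy sketch of that idempotent-cleanup pattern, with hypothetical helpers that are not Calico or containerd code:

# Toy sketch of the idempotent teardown pattern in the CNI logs above:
# every step tolerates "already gone", so a repeated teardown (here, the
# later "Forcibly stopping sandbox") still reports success.
# Hypothetical helpers; not Calico or containerd code.
def release_address(allocations: dict, handle_id: str) -> None:
    if handle_id not in allocations:
        # "Asked to release address but it doesn't exist. Ignoring"
        return
    del allocations[handle_id]

def teardown(allocations: dict, handle_id: str) -> str:
    release_address(allocations, handle_id)
    return "Teardown processing complete."

if __name__ == "__main__":
    allocs = {"k8s-pod-network.96046ad5": "192.168.0.10"}
    print(teardown(allocs, "k8s-pod-network.96046ad5"))  # first pass frees it
    print(teardown(allocs, "k8s-pod-network.96046ad5"))  # repeat is a no-op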
May 16 00:23:58.055837 containerd[1554]: time="2025-05-16T00:23:58.055686171Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 16 00:23:58.128954 containerd[1554]: time="2025-05-16T00:23:58.127727793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 16 00:23:58.128954 containerd[1554]: time="2025-05-16T00:23:58.128915661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 16 00:23:58.599151 kubelet[2779]: E0516 00:23:58.474655 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 16 00:23:58.611006 kubelet[2779]: E0516 00:23:58.610922 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 16 00:23:58.670538 kubelet[2779]: E0516 00:23:58.670408 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f2939758c3c44271a54467e3945f6783,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 16 00:23:58.675204 containerd[1554]: time="2025-05-16T00:23:58.675043962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 16 00:23:58.910970 containerd[1554]: time="2025-05-16T00:23:58.910832323Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 16 00:23:58.912283 containerd[1554]: time="2025-05-16T00:23:58.912265108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 16 00:23:58.912625 containerd[1554]: time="2025-05-16T00:23:58.912600324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 16 00:23:58.925559 kubelet[2779]: E0516 00:23:58.925532 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 00:23:58.926051 kubelet[2779]: E0516 00:23:58.925738 2779 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 00:23:58.926051 kubelet[2779]: E0516 00:23:58.925819 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bslh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786d5f7786-2pbzk_calico-system(9824a9f5-eda0-4e88-bed1-bf9b90951f1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 16 00:23:59.008886 kubelet[2779]: E0516 00:23:59.007909 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-786d5f7786-2pbzk" podUID="9824a9f5-eda0-4e88-bed1-bf9b90951f1d"
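Every pull failure in this log bottoms out in the same anonymous-token request to ghcr.io answering 403 Forbidden. A quick way to replay that request outside kubelet and containerd, using only the Python standard library; the token URL is copied verbatim from the log lines above, and network access is assumed:

# Sketch: replay the anonymous registry token request that containerd
# performs before pulling, to confirm the 403 seen throughout this log.
# URL copied verbatim from the log; requires network access.
import urllib.error
import urllib.request

TOKEN_URL = ("https://ghcr.io/token"
             "?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull"
             "&service=ghcr.io")

try:
    with urllib.request.urlopen(TOKEN_URL) as resp:
        print("status:", resp.status)        # 200 would mean anonymous pulls work
except urllib.error.HTTPError as err:
    print("status:", err.code, err.reason)   # 403 Forbidden, as logged above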