May 27 17:15:49.833199 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 27 17:15:49.833219 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 15:31:23 -00 2025
May 27 17:15:49.833228 kernel: KASLR enabled
May 27 17:15:49.833234 kernel: efi: EFI v2.7 by EDK II
May 27 17:15:49.833239 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 27 17:15:49.833245 kernel: random: crng init done
May 27 17:15:49.833252 kernel: secureboot: Secure boot disabled
May 27 17:15:49.833257 kernel: ACPI: Early table checksum verification disabled
May 27 17:15:49.833263 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 27 17:15:49.833270 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 27 17:15:49.833276 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833292 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833300 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833306 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833313 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833321 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833327 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833334 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833340 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:15:49.833346 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 27 17:15:49.833352 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 17:15:49.833358 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 27 17:15:49.833364 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 27 17:15:49.833370 kernel: Zone ranges:
May 27 17:15:49.833377 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 27 17:15:49.833384 kernel: DMA32 empty
May 27 17:15:49.833390 kernel: Normal empty
May 27 17:15:49.833396 kernel: Device empty
May 27 17:15:49.833402 kernel: Movable zone start for each node
May 27 17:15:49.833408 kernel: Early memory node ranges
May 27 17:15:49.833414 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 27 17:15:49.833420 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 27 17:15:49.833427 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 27 17:15:49.833433 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 27 17:15:49.833499 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 27 17:15:49.833507 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 27 17:15:49.833513 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 27 17:15:49.833521 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 27 17:15:49.833528 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 27 17:15:49.833534 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 27 17:15:49.833543 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 27 17:15:49.833549 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 27 17:15:49.833556 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 27 17:15:49.833564 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 27 17:15:49.833571 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 27 17:15:49.833577 kernel: psci: probing for conduit method from ACPI.
May 27 17:15:49.833584 kernel: psci: PSCIv1.1 detected in firmware.
May 27 17:15:49.833591 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 17:15:49.833598 kernel: psci: Trusted OS migration not required
May 27 17:15:49.833604 kernel: psci: SMC Calling Convention v1.1
May 27 17:15:49.833611 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 27 17:15:49.833618 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 17:15:49.833624 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 17:15:49.833633 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 27 17:15:49.833639 kernel: Detected PIPT I-cache on CPU0
May 27 17:15:49.833646 kernel: CPU features: detected: GIC system register CPU interface
May 27 17:15:49.833653 kernel: CPU features: detected: Spectre-v4
May 27 17:15:49.833659 kernel: CPU features: detected: Spectre-BHB
May 27 17:15:49.833666 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 27 17:15:49.833673 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 27 17:15:49.833679 kernel: CPU features: detected: ARM erratum 1418040
May 27 17:15:49.833686 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 27 17:15:49.833692 kernel: alternatives: applying boot alternatives
May 27 17:15:49.833700 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:15:49.833709 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:15:49.833716 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:15:49.833723 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:15:49.833729 kernel: Fallback order for Node 0: 0
May 27 17:15:49.833736 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 27 17:15:49.833742 kernel: Policy zone: DMA
May 27 17:15:49.833749 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:15:49.833756 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 27 17:15:49.833762 kernel: software IO TLB: area num 4.
May 27 17:15:49.833769 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 27 17:15:49.833776 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 27 17:15:49.833782 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 17:15:49.833790 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:15:49.833797 kernel: rcu: RCU event tracing is enabled.
May 27 17:15:49.833804 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 17:15:49.833811 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:15:49.833817 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:15:49.833824 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:15:49.833830 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 17:15:49.833837 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:15:49.833843 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:15:49.833850 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 17:15:49.833857 kernel: GICv3: 256 SPIs implemented
May 27 17:15:49.833865 kernel: GICv3: 0 Extended SPIs implemented
May 27 17:15:49.833872 kernel: Root IRQ handler: gic_handle_irq
May 27 17:15:49.833878 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 27 17:15:49.833885 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 27 17:15:49.833892 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 27 17:15:49.833898 kernel: ITS [mem 0x08080000-0x0809ffff]
May 27 17:15:49.833905 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 27 17:15:49.833912 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 27 17:15:49.833918 kernel: GICv3: using LPI property table @0x00000000400f0000
May 27 17:15:49.833925 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 27 17:15:49.833932 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:15:49.833938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:15:49.833946 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 27 17:15:49.833953 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 27 17:15:49.833960 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 27 17:15:49.833967 kernel: arm-pv: using stolen time PV
May 27 17:15:49.833974 kernel: Console: colour dummy device 80x25
May 27 17:15:49.833981 kernel: ACPI: Core revision 20240827
May 27 17:15:49.833988 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 27 17:15:49.833994 kernel: pid_max: default: 32768 minimum: 301
May 27 17:15:49.834001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:15:49.834009 kernel: landlock: Up and running.
May 27 17:15:49.834016 kernel: SELinux: Initializing.
May 27 17:15:49.834022 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:15:49.834029 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:15:49.834037 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:15:49.834043 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:15:49.834051 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:15:49.834057 kernel: Remapping and enabling EFI services.
May 27 17:15:49.834064 kernel: smp: Bringing up secondary CPUs ...
May 27 17:15:49.834071 kernel: Detected PIPT I-cache on CPU1
May 27 17:15:49.834084 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 27 17:15:49.834091 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 27 17:15:49.834100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:15:49.834107 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 27 17:15:49.834114 kernel: Detected PIPT I-cache on CPU2
May 27 17:15:49.834121 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 27 17:15:49.834128 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 27 17:15:49.834137 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:15:49.834144 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 27 17:15:49.834151 kernel: Detected PIPT I-cache on CPU3
May 27 17:15:49.834158 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 27 17:15:49.834165 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 27 17:15:49.834172 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:15:49.834179 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 27 17:15:49.834186 kernel: smp: Brought up 1 node, 4 CPUs
May 27 17:15:49.834193 kernel: SMP: Total of 4 processors activated.
May 27 17:15:49.834200 kernel: CPU: All CPU(s) started at EL1
May 27 17:15:49.834208 kernel: CPU features: detected: 32-bit EL0 Support
May 27 17:15:49.834216 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 27 17:15:49.834223 kernel: CPU features: detected: Common not Private translations
May 27 17:15:49.834230 kernel: CPU features: detected: CRC32 instructions
May 27 17:15:49.834237 kernel: CPU features: detected: Enhanced Virtualization Traps
May 27 17:15:49.834244 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 27 17:15:49.834251 kernel: CPU features: detected: LSE atomic instructions
May 27 17:15:49.834259 kernel: CPU features: detected: Privileged Access Never
May 27 17:15:49.834266 kernel: CPU features: detected: RAS Extension Support
May 27 17:15:49.834274 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 27 17:15:49.834287 kernel: alternatives: applying system-wide alternatives
May 27 17:15:49.834297 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 27 17:15:49.834305 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 27 17:15:49.834312 kernel: devtmpfs: initialized
May 27 17:15:49.834319 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:15:49.834326 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 17:15:49.834333 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 27 17:15:49.834340 kernel: 0 pages in range for non-PLT usage
May 27 17:15:49.834350 kernel: 508544 pages in range for PLT usage
May 27 17:15:49.834357 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:15:49.834364 kernel: SMBIOS 3.0.0 present.
May 27 17:15:49.834371 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 27 17:15:49.834378 kernel: DMI: Memory slots populated: 1/1
May 27 17:15:49.834385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:15:49.834392 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 17:15:49.834399 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 17:15:49.834407 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 17:15:49.834415 kernel: audit: initializing netlink subsys (disabled)
May 27 17:15:49.834422 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
May 27 17:15:49.834429 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:15:49.834436 kernel: cpuidle: using governor menu
May 27 17:15:49.834450 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 17:15:49.834458 kernel: ASID allocator initialised with 32768 entries
May 27 17:15:49.834465 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:15:49.834472 kernel: Serial: AMBA PL011 UART driver
May 27 17:15:49.834479 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:15:49.834488 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:15:49.834495 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 17:15:49.834503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 17:15:49.834510 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:15:49.834517 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:15:49.834524 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 17:15:49.834531 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 17:15:49.834538 kernel: ACPI: Added _OSI(Module Device)
May 27 17:15:49.834545 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:15:49.834554 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:15:49.834561 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:15:49.834568 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:15:49.834575 kernel: ACPI: Interpreter enabled
May 27 17:15:49.834582 kernel: ACPI: Using GIC for interrupt routing
May 27 17:15:49.834589 kernel: ACPI: MCFG table detected, 1 entries
May 27 17:15:49.834596 kernel: ACPI: CPU0 has been hot-added
May 27 17:15:49.834603 kernel: ACPI: CPU1 has been hot-added
May 27 17:15:49.834610 kernel: ACPI: CPU2 has been hot-added
May 27 17:15:49.834618 kernel: ACPI: CPU3 has been hot-added
May 27 17:15:49.834625 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 27 17:15:49.834633 kernel: printk: legacy console [ttyAMA0] enabled
May 27 17:15:49.834640 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:15:49.834779 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:15:49.834849 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 27 17:15:49.834912 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 27 17:15:49.834974 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 27 17:15:49.835038 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 27 17:15:49.835048 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 27 17:15:49.835055 kernel: PCI host bridge to bus 0000:00
May 27 17:15:49.835129 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 27 17:15:49.835189 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 27 17:15:49.835244 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 27 17:15:49.835315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:15:49.835400 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 27 17:15:49.835486 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 17:15:49.835554 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 27 17:15:49.835617 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 27 17:15:49.835679 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 27 17:15:49.835739 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 27 17:15:49.835800 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 27 17:15:49.835864 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 27 17:15:49.835923 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 27 17:15:49.835980 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 27 17:15:49.836034 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 27 17:15:49.836043 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 27 17:15:49.836050 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 27 17:15:49.836057 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 27 17:15:49.836066 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 27 17:15:49.836073 kernel: iommu: Default domain type: Translated
May 27 17:15:49.836080 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 17:15:49.836087 kernel: efivars: Registered efivars operations
May 27 17:15:49.836094 kernel: vgaarb: loaded
May 27 17:15:49.836101 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 17:15:49.836108 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:15:49.836115 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:15:49.836122 kernel: pnp: PnP ACPI init
May 27 17:15:49.836189 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 27 17:15:49.836199 kernel: pnp: PnP ACPI: found 1 devices
May 27 17:15:49.836206 kernel: NET: Registered PF_INET protocol family
May 27 17:15:49.836213 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:15:49.836220 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:15:49.836228 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:15:49.836235 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:15:49.836242 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:15:49.836252 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:15:49.836259 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:15:49.836266 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:15:49.836273 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:15:49.836279 kernel: PCI: CLS 0 bytes, default 64
May 27 17:15:49.836294 kernel: kvm [1]: HYP mode not available
May 27 17:15:49.836302 kernel: Initialise system trusted keyrings
May 27 17:15:49.836309 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:15:49.836316 kernel: Key type asymmetric registered
May 27 17:15:49.836325 kernel: Asymmetric key parser 'x509' registered
May 27 17:15:49.836332 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 17:15:49.836339 kernel: io scheduler mq-deadline registered
May 27 17:15:49.836346 kernel: io scheduler kyber registered
May 27 17:15:49.836353 kernel: io scheduler bfq registered
May 27 17:15:49.836360 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 27 17:15:49.836366 kernel: ACPI: button: Power Button [PWRB]
May 27 17:15:49.836374 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 27 17:15:49.836493 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 27 17:15:49.836509 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:15:49.836516 kernel: thunder_xcv, ver 1.0
May 27 17:15:49.836523 kernel: thunder_bgx, ver 1.0
May 27 17:15:49.836530 kernel: nicpf, ver 1.0
May 27 17:15:49.836537 kernel: nicvf, ver 1.0
May 27 17:15:49.836618 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 17:15:49.836678 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T17:15:49 UTC (1748366149)
May 27 17:15:49.836687 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 17:15:49.836696 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 27 17:15:49.836703 kernel: watchdog: NMI not fully supported
May 27 17:15:49.836710 kernel: watchdog: Hard watchdog permanently disabled
May 27 17:15:49.836717 kernel: NET: Registered PF_INET6 protocol family
May 27 17:15:49.836724 kernel: Segment Routing with IPv6
May 27 17:15:49.836731 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:15:49.836738 kernel: NET: Registered PF_PACKET protocol family
May 27 17:15:49.836745 kernel: Key type dns_resolver registered
May 27 17:15:49.836752 kernel: registered taskstats version 1
May 27 17:15:49.836760 kernel: Loading compiled-in X.509 certificates
May 27 17:15:49.836767 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 8e5e45c34fa91568ef1fa3bdfd5a71a43b4c4580'
May 27 17:15:49.836774 kernel: Demotion targets for Node 0: null
May 27 17:15:49.836781 kernel: Key type .fscrypt registered
May 27 17:15:49.836788 kernel: Key type fscrypt-provisioning registered
May 27 17:15:49.836795 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:15:49.836802 kernel: ima: Allocated hash algorithm: sha1
May 27 17:15:49.836809 kernel: ima: No architecture policies found
May 27 17:15:49.836816 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 17:15:49.836826 kernel: clk: Disabling unused clocks
May 27 17:15:49.836833 kernel: PM: genpd: Disabling unused power domains
May 27 17:15:49.836840 kernel: Warning: unable to open an initial console.
May 27 17:15:49.836847 kernel: Freeing unused kernel memory: 39424K
May 27 17:15:49.836854 kernel: Run /init as init process
May 27 17:15:49.836862 kernel: with arguments:
May 27 17:15:49.836868 kernel: /init
May 27 17:15:49.836875 kernel: with environment:
May 27 17:15:49.836882 kernel: HOME=/
May 27 17:15:49.836891 kernel: TERM=linux
May 27 17:15:49.836898 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:15:49.836906 systemd[1]: Successfully made /usr/ read-only.
May 27 17:15:49.836916 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:15:49.836924 systemd[1]: Detected virtualization kvm.
May 27 17:15:49.836932 systemd[1]: Detected architecture arm64.
May 27 17:15:49.836939 systemd[1]: Running in initrd.
May 27 17:15:49.836946 systemd[1]: No hostname configured, using default hostname.
May 27 17:15:49.836956 systemd[1]: Hostname set to .
May 27 17:15:49.836963 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:15:49.836971 systemd[1]: Queued start job for default target initrd.target.
May 27 17:15:49.836978 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:15:49.836986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:15:49.836994 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:15:49.837002 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:15:49.837010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:15:49.837020 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:15:49.837028 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:15:49.837037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:15:49.837044 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:15:49.837052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:15:49.837060 systemd[1]: Reached target paths.target - Path Units.
May 27 17:15:49.837069 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:15:49.837076 systemd[1]: Reached target swap.target - Swaps.
May 27 17:15:49.837084 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:15:49.837092 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:15:49.837099 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:15:49.837107 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:15:49.837115 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:15:49.837122 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:15:49.837130 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:15:49.837139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:15:49.837147 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:15:49.837155 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:15:49.837163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:15:49.837170 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:15:49.837178 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:15:49.837186 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:15:49.837194 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:15:49.837202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:15:49.837210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:15:49.837218 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:15:49.837226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:15:49.837234 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:15:49.837243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:15:49.837267 systemd-journald[244]: Collecting audit messages is disabled.
May 27 17:15:49.837292 systemd-journald[244]: Journal started
May 27 17:15:49.837313 systemd-journald[244]: Runtime Journal (/run/log/journal/6c226c4aeb24481488fdf28f28083813) is 6M, max 48.5M, 42.4M free.
May 27 17:15:49.828291 systemd-modules-load[245]: Inserted module 'overlay'
May 27 17:15:49.844127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:15:49.844148 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:15:49.845858 systemd-modules-load[245]: Inserted module 'br_netfilter'
May 27 17:15:49.847727 kernel: Bridge firewalling registered
May 27 17:15:49.847749 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:15:49.849166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:15:49.850638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:15:49.856296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:15:49.858231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:15:49.860386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:15:49.869668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:15:49.875418 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:15:49.877680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:15:49.881981 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:15:49.885226 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:15:49.888329 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:15:49.890607 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:15:49.893377 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:15:49.913538 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:15:49.929746 systemd-resolved[288]: Positive Trust Anchors:
May 27 17:15:49.929764 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:15:49.929796 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:15:49.934547 systemd-resolved[288]: Defaulting to hostname 'linux'.
May 27 17:15:49.938239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:15:49.940742 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:15:49.990473 kernel: SCSI subsystem initialized
May 27 17:15:49.995461 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:15:50.003467 kernel: iscsi: registered transport (tcp)
May 27 17:15:50.016481 kernel: iscsi: registered transport (qla4xxx)
May 27 17:15:50.016550 kernel: QLogic iSCSI HBA Driver
May 27 17:15:50.032765 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:15:50.048141 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:15:50.049740 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:15:50.097988 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:15:50.100492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:15:50.162486 kernel: raid6: neonx8 gen() 15760 MB/s
May 27 17:15:50.179482 kernel: raid6: neonx4 gen() 15777 MB/s
May 27 17:15:50.196463 kernel: raid6: neonx2 gen() 13182 MB/s
May 27 17:15:50.213469 kernel: raid6: neonx1 gen() 10536 MB/s
May 27 17:15:50.230463 kernel: raid6: int64x8 gen() 6892 MB/s
May 27 17:15:50.247465 kernel: raid6: int64x4 gen() 7344 MB/s
May 27 17:15:50.264465 kernel: raid6: int64x2 gen() 6096 MB/s
May 27 17:15:50.281597 kernel: raid6: int64x1 gen() 5046 MB/s
May 27 17:15:50.281615 kernel: raid6: using algorithm neonx4 gen() 15777 MB/s
May 27 17:15:50.299533 kernel: raid6: .... xor() 12393 MB/s, rmw enabled
May 27 17:15:50.299550 kernel: raid6: using neon recovery algorithm
May 27 17:15:50.304861 kernel: xor: measuring software checksum speed
May 27 17:15:50.304883 kernel: 8regs : 20655 MB/sec
May 27 17:15:50.305552 kernel: 32regs : 21658 MB/sec
May 27 17:15:50.306774 kernel: arm64_neon : 28070 MB/sec
May 27 17:15:50.306787 kernel: xor: using function: arm64_neon (28070 MB/sec)
May 27 17:15:50.364483 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:15:50.370350 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:15:50.372903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:15:50.405185 systemd-udevd[498]: Using default interface naming scheme 'v255'.
May 27 17:15:50.409505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:15:50.411950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:15:50.449215 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
May 27 17:15:50.473175 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:15:50.476585 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:15:50.526218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:15:50.530554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:15:50.578464 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 27 17:15:50.580867 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 17:15:50.582613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:15:50.592121 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 17:15:50.592141 kernel: GPT:9289727 != 19775487
May 27 17:15:50.592150 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 17:15:50.592166 kernel: GPT:9289727 != 19775487
May 27 17:15:50.592175 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 17:15:50.592183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:15:50.582734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:15:50.587813 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:15:50.594670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:15:50.614673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:15:50.631496 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 17:15:50.633000 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:15:50.642010 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 17:15:50.648379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 17:15:50.649739 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 17:15:50.658537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:15:50.659829 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:15:50.661916 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:15:50.664007 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:15:50.666763 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:15:50.668584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:15:50.688550 disk-uuid[589]: Primary Header is updated.
May 27 17:15:50.688550 disk-uuid[589]: Secondary Entries is updated.
May 27 17:15:50.688550 disk-uuid[589]: Secondary Header is updated.
May 27 17:15:50.692465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:15:50.695215 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:15:51.704465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:15:51.704895 disk-uuid[594]: The operation has completed successfully.
May 27 17:15:51.723984 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:15:51.724081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:15:51.754399 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:15:51.767200 sh[609]: Success
May 27 17:15:51.782161 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:15:51.782210 kernel: device-mapper: uevent: version 1.0.3
May 27 17:15:51.782231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:15:51.792166 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 27 17:15:51.818211 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:15:51.820603 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:15:51.833610 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:15:51.841059 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:15:51.841096 kernel: BTRFS: device fsid 3c8c76ef-f1da-40fe-979d-11bdf765e403 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (621)
May 27 17:15:51.843459 kernel: BTRFS info (device dm-0): first mount of filesystem 3c8c76ef-f1da-40fe-979d-11bdf765e403
May 27 17:15:51.843479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 27 17:15:51.843489 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:15:51.847368 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:15:51.848627 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:15:51.850057 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:15:51.850782 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:15:51.852328 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:15:51.877643 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (650)
May 27 17:15:51.877681 kernel: BTRFS info (device vda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:15:51.879841 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:15:51.880727 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:15:51.891449 kernel: BTRFS info (device vda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:15:51.891494 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:15:51.893646 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:15:51.961484 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:15:51.965744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:15:52.010199 systemd-networkd[797]: lo: Link UP
May 27 17:15:52.010212 systemd-networkd[797]: lo: Gained carrier
May 27 17:15:52.010934 systemd-networkd[797]: Enumeration completed
May 27 17:15:52.011036 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:15:52.011813 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:15:52.011817 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:15:52.012616 systemd[1]: Reached target network.target - Network.
May 27 17:15:52.012686 systemd-networkd[797]: eth0: Link UP
May 27 17:15:52.012689 systemd-networkd[797]: eth0: Gained carrier
May 27 17:15:52.012697 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:15:52.032662 ignition[701]: Ignition 2.21.0
May 27 17:15:52.032678 ignition[701]: Stage: fetch-offline
May 27 17:15:52.032708 ignition[701]: no configs at "/usr/lib/ignition/base.d"
May 27 17:15:52.032717 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:15:52.032900 ignition[701]: parsed url from cmdline: ""
May 27 17:15:52.032903 ignition[701]: no config URL provided
May 27 17:15:52.032907 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:15:52.032914 ignition[701]: no config at "/usr/lib/ignition/user.ign"
May 27 17:15:52.032933 ignition[701]: op(1): [started] loading QEMU firmware config module
May 27 17:15:52.038487 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:15:52.032938 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 17:15:52.040305 ignition[701]: op(1): [finished] loading QEMU firmware config module
May 27 17:15:52.080463 ignition[701]: parsing config with SHA512: e828fd35708c45ec59f352f6dfb871e8943ed0d6bea234407ff71eaac96a4676a64eea9caf21da8ad620471bba7d97532524f024f9da796780b13a5ca22224e1
May 27 17:15:52.086568 unknown[701]: fetched base config from "system"
May 27 17:15:52.086582 unknown[701]: fetched user config from "qemu"
May 27 17:15:52.086943 ignition[701]: fetch-offline: fetch-offline passed
May 27 17:15:52.089162 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:15:52.087004 ignition[701]: Ignition finished successfully
May 27 17:15:52.090531 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 27 17:15:52.091261 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:15:52.128209 ignition[808]: Ignition 2.21.0
May 27 17:15:52.128228 ignition[808]: Stage: kargs
May 27 17:15:52.128385 ignition[808]: no configs at "/usr/lib/ignition/base.d"
May 27 17:15:52.128394 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:15:52.131428 ignition[808]: kargs: kargs passed
May 27 17:15:52.131518 ignition[808]: Ignition finished successfully
May 27 17:15:52.134985 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:15:52.136991 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:15:52.164252 ignition[816]: Ignition 2.21.0
May 27 17:15:52.164270 ignition[816]: Stage: disks
May 27 17:15:52.164405 ignition[816]: no configs at "/usr/lib/ignition/base.d"
May 27 17:15:52.164413 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:15:52.165116 ignition[816]: disks: disks passed
May 27 17:15:52.168871 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:15:52.165155 ignition[816]: Ignition finished successfully
May 27 17:15:52.170184 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:15:52.171814 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:15:52.173531 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:15:52.175308 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:15:52.177312 systemd[1]: Reached target basic.target - Basic System.
May 27 17:15:52.179778 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:15:52.211779 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 17:15:52.215839 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:15:52.218675 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:15:52.288367 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:15:52.289914 kernel: EXT4-fs (vda9): mounted filesystem a5483afc-8426-4c3e-85ef-8146f9077e7d r/w with ordered data mode. Quota mode: none.
May 27 17:15:52.289650 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:15:52.292004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:15:52.293614 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:15:52.294637 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 17:15:52.294689 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:15:52.294711 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:15:52.311996 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:15:52.314339 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:15:52.319158 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (835)
May 27 17:15:52.319187 kernel: BTRFS info (device vda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:15:52.319199 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:15:52.320973 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:15:52.323792 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:15:52.354028 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:15:52.357286 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
May 27 17:15:52.360490 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:15:52.364221 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:15:52.443482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:15:52.445427 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:15:52.446959 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:15:52.464461 kernel: BTRFS info (device vda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:15:52.478523 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:15:52.488354 ignition[950]: INFO : Ignition 2.21.0
May 27 17:15:52.488354 ignition[950]: INFO : Stage: mount
May 27 17:15:52.489939 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:15:52.489939 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:15:52.489939 ignition[950]: INFO : mount: mount passed
May 27 17:15:52.493748 ignition[950]: INFO : Ignition finished successfully
May 27 17:15:52.491319 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:15:52.493980 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:15:52.839917 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:15:52.841362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:15:52.871509 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (963)
May 27 17:15:52.871552 kernel: BTRFS info (device vda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:15:52.871563 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:15:52.873079 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:15:52.876723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:15:52.904564 ignition[980]: INFO : Ignition 2.21.0
May 27 17:15:52.904564 ignition[980]: INFO : Stage: files
May 27 17:15:52.906957 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:15:52.906957 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:15:52.909182 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:15:52.909182 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:15:52.909182 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:15:52.913511 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:15:52.913511 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:15:52.913511 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:15:52.913511 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 27 17:15:52.913511 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 27 17:15:52.909903 unknown[980]: wrote ssh authorized keys file for user: core
May 27 17:15:53.738547 systemd-networkd[797]: eth0: Gained IPv6LL
May 27 17:15:53.750147 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:15:53.971644 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 27 17:15:53.971644 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:15:53.975669 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 27 17:15:54.495721 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 17:15:54.783015 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:15:54.783015 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 27 17:15:54.786727 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 27 17:15:54.803408 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 27 17:15:54.806793 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 27 17:15:54.809660 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 27 17:15:54.809660 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:15:54.809660 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:15:54.809660 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:15:54.809660 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:15:54.809660 ignition[980]: INFO : files: files passed
May 27 17:15:54.809660 ignition[980]: INFO : Ignition finished successfully
May 27 17:15:54.810257 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:15:54.813571 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:15:54.815251 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:15:54.825694 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 17:15:54.824164 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:15:54.829476 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:15:54.829476 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:15:54.824466 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:15:54.835503 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:15:54.830496 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:15:54.832176 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:15:54.835142 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:15:54.872733 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:15:54.873631 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:15:54.875136 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:15:54.877489 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:15:54.879238 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:15:54.879972 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:15:54.903224 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:15:54.905561 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:15:54.938011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:15:54.939329 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:15:54.941476 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:15:54.943238 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:15:54.943371 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:15:54.945829 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:15:54.947825 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:15:54.949433 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:15:54.951120 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:15:54.953021 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:15:54.954929 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:15:54.956865 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:15:54.958665 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:15:54.960632 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:15:54.962553 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:15:54.964289 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:15:54.965825 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:15:54.965958 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:15:54.968208 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:15:54.969349 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:15:54.971223 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:15:54.974506 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:15:54.975734 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:15:54.975852 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:15:54.978878 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:15:54.978995 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:15:54.981043 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:15:54.982582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:15:54.987507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:15:54.988753 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:15:54.990799 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:15:54.992304 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:15:54.992386 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:15:54.993980 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:15:54.994065 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:15:54.995577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:15:54.995692 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:15:54.997522 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:15:54.997641 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:15:54.999947 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:15:55.001761 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:15:55.001893 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:15:55.018978 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:15:55.019854 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:15:55.019990 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:15:55.021883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:15:55.021982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:15:55.028382 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:15:55.030508 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:15:55.034494 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:15:55.035580 ignition[1037]: INFO : Ignition 2.21.0
May 27 17:15:55.035580 ignition[1037]: INFO : Stage: umount
May 27 17:15:55.035580 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:15:55.035580 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:15:55.040305 ignition[1037]: INFO : umount: umount passed
May 27 17:15:55.040305 ignition[1037]: INFO : Ignition finished successfully
May 27 17:15:55.039876 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:15:55.039985 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:15:55.041479 systemd[1]: Stopped target network.target - Network.
May 27 17:15:55.042959 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:15:55.043017 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:15:55.044810 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:15:55.044855 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:15:55.046641 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:15:55.046692 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:15:55.048636 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:15:55.048677 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:15:55.050471 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:15:55.052245 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:15:55.057718 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:15:55.057853 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:15:55.061100 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:15:55.061359 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:15:55.061401 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:15:55.065212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:15:55.065539 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:15:55.065647 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:15:55.068326 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:15:55.068774 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:15:55.069963 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:15:55.069996 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:15:55.072942 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:15:55.073952 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:15:55.074011 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:15:55.076333 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:15:55.076376 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:15:55.079404 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:15:55.079461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:15:55.081691 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:15:55.086432 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:15:55.102197 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:15:55.106630 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:15:55.107854 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:15:55.109406 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:15:55.110856 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:15:55.110939 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:15:55.113345 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:15:55.113398 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:15:55.114510 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:15:55.114540 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:15:55.116313 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:15:55.116360 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:15:55.119014 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:15:55.119061 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:15:55.121671 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:15:55.121720 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:15:55.124428 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:15:55.124498 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:15:55.126970 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:15:55.128059 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:15:55.128110 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:15:55.131158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:15:55.131199 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:15:55.134141 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:15:55.134181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:15:55.143980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:15:55.144073 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:15:55.146215 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:15:55.148751 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:15:55.167763 systemd[1]: Switching root.
May 27 17:15:55.203384 systemd-journald[244]: Journal stopped
May 27 17:15:55.965585 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
May 27 17:15:55.965636 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:15:55.965652 kernel: SELinux: policy capability open_perms=1
May 27 17:15:55.965662 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:15:55.965671 kernel: SELinux: policy capability always_check_network=0
May 27 17:15:55.965684 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:15:55.965695 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:15:55.965704 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:15:55.965716 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:15:55.965728 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:15:55.965738 kernel: audit: type=1403 audit(1748366155.364:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:15:55.965752 systemd[1]: Successfully loaded SELinux policy in 43.048ms.
May 27 17:15:55.965764 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.409ms.
May 27 17:15:55.965777 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:15:55.965787 systemd[1]: Detected virtualization kvm.
May 27 17:15:55.965797 systemd[1]: Detected architecture arm64.
May 27 17:15:55.965806 systemd[1]: Detected first boot.
May 27 17:15:55.965817 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:15:55.965826 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:15:55.965837 zram_generator::config[1084]: No configuration found.
May 27 17:15:55.965848 systemd[1]: Populated /etc with preset unit settings.
May 27 17:15:55.965858 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:15:55.965868 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:15:55.965878 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:15:55.965888 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:15:55.965898 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:15:55.965908 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:15:55.965919 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:15:55.965929 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:15:55.965939 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:15:55.965949 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:15:55.965959 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:15:55.965969 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:15:55.965978 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:15:55.965989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:15:55.965999 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:15:55.966010 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:15:55.966020 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:15:55.966030 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:15:55.966040 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 27 17:15:55.966050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:15:55.966060 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:15:55.966070 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:15:55.966082 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:15:55.966092 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:15:55.966103 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:15:55.966113 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:15:55.966123 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:15:55.966133 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:15:55.966143 systemd[1]: Reached target swap.target - Swaps.
May 27 17:15:55.966153 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:15:55.966162 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:15:55.966174 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:15:55.966188 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:15:55.966198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:15:55.966208 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:15:55.966218 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:15:55.966228 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:15:55.966238 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:15:55.966251 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:15:55.966266 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:15:55.966281 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:15:55.966291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:15:55.966302 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:15:55.966312 systemd[1]: Reached target machines.target - Containers.
May 27 17:15:55.966322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:15:55.966332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:15:55.966342 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:15:55.966352 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:15:55.966362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:15:55.966376 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:15:55.966386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:15:55.966396 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:15:55.966406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:15:55.966416 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:15:55.966426 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:15:55.966436 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:15:55.966457 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:15:55.966470 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:15:55.966481 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:15:55.966490 kernel: fuse: init (API version 7.41)
May 27 17:15:55.966500 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:15:55.966509 kernel: loop: module loaded
May 27 17:15:55.966519 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:15:55.966529 kernel: ACPI: bus type drm_connector registered
May 27 17:15:55.966538 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:15:55.966549 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:15:55.966560 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:15:55.966570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:15:55.966580 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:15:55.966590 systemd[1]: Stopped verity-setup.service.
May 27 17:15:55.966601 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:15:55.966633 systemd-journald[1158]: Collecting audit messages is disabled.
May 27 17:15:55.966654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:15:55.966664 systemd-journald[1158]: Journal started
May 27 17:15:55.966684 systemd-journald[1158]: Runtime Journal (/run/log/journal/6c226c4aeb24481488fdf28f28083813) is 6M, max 48.5M, 42.4M free.
May 27 17:15:55.735111 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:15:55.748369 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 17:15:55.748752 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:15:55.968473 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:15:55.969971 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:15:55.971181 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:15:55.972462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:15:55.973713 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:15:55.976467 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:15:55.977927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:15:55.979479 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:15:55.979646 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:15:55.981087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:15:55.981244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:15:55.982729 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:15:55.982879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:15:55.984200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:15:55.984370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:15:55.985903 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:15:55.986079 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:15:55.987638 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:15:55.987790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:15:55.989184 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:15:55.990676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:15:55.992354 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:15:55.993921 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:15:56.006421 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:15:56.008844 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:15:56.010957 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:15:56.012228 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:15:56.012272 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:15:56.014189 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:15:56.020198 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:15:56.021419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:15:56.022390 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:15:56.024462 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:15:56.025730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:15:56.026710 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:15:56.027966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:15:56.038222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:15:56.039412 systemd-journald[1158]: Time spent on flushing to /var/log/journal/6c226c4aeb24481488fdf28f28083813 is 20.494ms for 880 entries.
May 27 17:15:56.039412 systemd-journald[1158]: System Journal (/var/log/journal/6c226c4aeb24481488fdf28f28083813) is 8M, max 195.6M, 187.6M free.
May 27 17:15:56.067673 systemd-journald[1158]: Received client request to flush runtime journal.
May 27 17:15:56.067707 kernel: loop0: detected capacity change from 0 to 138376
May 27 17:15:56.041506 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:15:56.044593 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:15:56.047829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:15:56.049365 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:15:56.053377 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:15:56.054928 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:15:56.058778 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:15:56.062131 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:15:56.070075 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:15:56.076651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:15:56.081474 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:15:56.096220 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:15:56.099014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:15:56.108648 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:15:56.116165 kernel: loop1: detected capacity change from 0 to 207008
May 27 17:15:56.125243 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 27 17:15:56.125268 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 27 17:15:56.129924 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:15:56.139468 kernel: loop2: detected capacity change from 0 to 107312
May 27 17:15:56.168459 kernel: loop3: detected capacity change from 0 to 138376
May 27 17:15:56.174470 kernel: loop4: detected capacity change from 0 to 207008
May 27 17:15:56.180471 kernel: loop5: detected capacity change from 0 to 107312
May 27 17:15:56.184307 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 17:15:56.184689 (sd-merge)[1221]: Merged extensions into '/usr'.
May 27 17:15:56.188408 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:15:56.188425 systemd[1]: Reloading...
May 27 17:15:56.252465 zram_generator::config[1247]: No configuration found.
May 27 17:15:56.310680 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:15:56.323558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:15:56.388632 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:15:56.388743 systemd[1]: Reloading finished in 199 ms.
May 27 17:15:56.418269 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:15:56.419852 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:15:56.431799 systemd[1]: Starting ensure-sysext.service...
May 27 17:15:56.433592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:15:56.449189 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)...
May 27 17:15:56.449320 systemd[1]: Reloading...
May 27 17:15:56.453917 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:15:56.453950 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:15:56.454146 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:15:56.454336 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:15:56.454966 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:15:56.455161 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
May 27 17:15:56.455203 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
May 27 17:15:56.457890 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:15:56.457904 systemd-tmpfiles[1282]: Skipping /boot
May 27 17:15:56.466593 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:15:56.466611 systemd-tmpfiles[1282]: Skipping /boot
May 27 17:15:56.492593 zram_generator::config[1309]: No configuration found.
May 27 17:15:56.563753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:15:56.626694 systemd[1]: Reloading finished in 177 ms.
May 27 17:15:56.648044 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:15:56.649719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:15:56.662616 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:15:56.665054 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:15:56.667857 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:15:56.671102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:15:56.680949 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:15:56.683220 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:15:56.691090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:15:56.692607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:15:56.696305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:15:56.699419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:15:56.702726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:15:56.702854 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:15:56.713637 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:15:56.717469 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:15:56.719142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:15:56.719297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:15:56.720857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:15:56.720986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:15:56.722623 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:15:56.722757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:15:56.732911 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:15:56.739112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:15:56.740384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:15:56.751049 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:15:56.751290 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
May 27 17:15:56.756527 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:15:56.760019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:15:56.761386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:15:56.761428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:15:56.762683 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:15:56.765920 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:15:56.767436 systemd[1]: Finished ensure-sysext.service.
May 27 17:15:56.770491 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:15:56.772201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:15:56.772349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:15:56.774638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:15:56.776395 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:15:56.776602 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:15:56.779196 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:15:56.779347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:15:56.786882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:15:56.787692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:15:56.793534 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:15:56.809897 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:15:56.811605 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:15:56.811680 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:15:56.813923 augenrules[1422]: No rules
May 27 17:15:56.816007 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 17:15:56.817206 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:15:56.817718 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:15:56.817907 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:15:56.824836 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 27 17:15:56.904414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:15:56.910602 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:15:56.951017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:15:56.951278 systemd-resolved[1349]: Positive Trust Anchors:
May 27 17:15:56.951296 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:15:56.951328 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:15:56.958338 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 17:15:56.959862 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:15:56.961317 systemd-resolved[1349]: Defaulting to hostname 'linux'.
May 27 17:15:56.965065 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:15:56.967073 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:15:56.968988 systemd-networkd[1420]: lo: Link UP
May 27 17:15:56.969003 systemd-networkd[1420]: lo: Gained carrier
May 27 17:15:56.969040 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:15:56.969895 systemd-networkd[1420]: Enumeration completed
May 27 17:15:56.970170 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:15:56.970748 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:15:56.970758 systemd-networkd[1420]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:15:56.971272 systemd-networkd[1420]: eth0: Link UP
May 27 17:15:56.971396 systemd-networkd[1420]: eth0: Gained carrier
May 27 17:15:56.971416 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:15:56.971557 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:15:56.973651 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:15:56.974900 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:15:56.976134 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:15:56.977946 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:15:56.977987 systemd[1]: Reached target paths.target - Path Units.
May 27 17:15:56.978935 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:15:56.981162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:15:56.984070 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:15:56.987346 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 17:15:56.988780 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 17:15:56.990135 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 17:15:56.995540 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 17:15:56.996551 systemd-networkd[1420]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 17:15:56.997080 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. May 27 17:15:56.997601 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 17:15:56.998718 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 17:15:56.998773 systemd-timesyncd[1427]: Initial clock synchronization to Tue 2025-05-27 17:15:57.158728 UTC. May 27 17:15:57.000071 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:15:57.001427 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 17:15:57.006076 systemd[1]: Reached target network.target - Network. May 27 17:15:57.007104 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:15:57.008060 systemd[1]: Reached target basic.target - Basic System. May 27 17:15:57.009138 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 17:15:57.009162 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 17:15:57.010349 systemd[1]: Starting containerd.service - containerd container runtime... May 27 17:15:57.012755 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 17:15:57.027606 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
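The DHCPv4 lease logged above (`10.0.0.128/16`, gateway `10.0.0.1` from `10.0.0.1`) can be checked with the standard-library `ipaddress` module — a quick sanity check that the gateway actually sits inside the acquired subnet:

```python
import ipaddress

# Values taken from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("10.0.0.128/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                # 10.0.0.0/16
print(iface.network.num_addresses)  # 65536
print(gateway in iface.network)     # True
```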
May 27 17:15:57.029571 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 17:15:57.033045 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 17:15:57.034044 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 17:15:57.041365 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 17:15:57.043989 jq[1463]: false May 27 17:15:57.045571 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 17:15:57.047585 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 17:15:57.051366 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 17:15:57.058399 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 17:15:57.061286 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 17:15:57.061946 extend-filesystems[1464]: Found loop3 May 27 17:15:57.064289 extend-filesystems[1464]: Found loop4 May 27 17:15:57.064289 extend-filesystems[1464]: Found loop5 May 27 17:15:57.064289 extend-filesystems[1464]: Found vda May 27 17:15:57.064289 extend-filesystems[1464]: Found vda1 May 27 17:15:57.064289 extend-filesystems[1464]: Found vda2 May 27 17:15:57.064289 extend-filesystems[1464]: Found vda3 May 27 17:15:57.064289 extend-filesystems[1464]: Found usr May 27 17:15:57.064289 extend-filesystems[1464]: Found vda4 May 27 17:15:57.064289 extend-filesystems[1464]: Found vda6 May 27 17:15:57.064289 extend-filesystems[1464]: Found vda7 May 27 17:15:57.064289 extend-filesystems[1464]: Found vda9 May 27 17:15:57.064289 extend-filesystems[1464]: Checking size of /dev/vda9 May 27 17:15:57.064603 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 27 17:15:57.080328 extend-filesystems[1464]: Resized partition /dev/vda9 May 27 17:15:57.067552 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 17:15:57.067948 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 17:15:57.073839 systemd[1]: Starting update-engine.service - Update Engine... May 27 17:15:57.078575 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 17:15:57.081843 extend-filesystems[1486]: resize2fs 1.47.2 (1-Jan-2025) May 27 17:15:57.087985 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 17:15:57.086496 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 17:15:57.088155 jq[1487]: true May 27 17:15:57.090361 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 17:15:57.090569 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 17:15:57.090814 systemd[1]: motdgen.service: Deactivated successfully. May 27 17:15:57.090977 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 17:15:57.093753 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 17:15:57.093927 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 27 17:15:57.111784 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 17:15:57.114487 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 17:15:57.117652 jq[1491]: true May 27 17:15:57.134210 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 17:15:57.134210 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 17:15:57.134210 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 17:15:57.128139 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 17:15:57.140995 update_engine[1481]: I20250527 17:15:57.137387 1481 main.cc:92] Flatcar Update Engine starting May 27 17:15:57.141258 extend-filesystems[1464]: Resized filesystem in /dev/vda9 May 27 17:15:57.128522 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 17:15:57.132522 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 17:15:57.149812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:15:57.168245 tar[1489]: linux-arm64/LICENSE May 27 17:15:57.171939 tar[1489]: linux-arm64/helm May 27 17:15:57.175815 dbus-daemon[1461]: [system] SELinux support is enabled May 27 17:15:57.175980 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 17:15:57.178959 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 17:15:57.178983 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
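The online resize above grew `/dev/vda9` from 553472 to 1864699 blocks at a 4 KiB block size. The arithmetic behind those block counts:

```python
# Block counts from the EXT4-fs / resize2fs messages above; ext4 here
# uses 4 KiB ("4k") blocks.
OLD_BLOCKS = 553_472
NEW_BLOCKS = 1_864_699
BLOCK_SIZE = 4096

old_bytes = OLD_BLOCKS * BLOCK_SIZE
new_bytes = NEW_BLOCKS * BLOCK_SIZE

print(f"before resize: {old_bytes / 2**30:.2f} GiB")  # 2.11 GiB
print(f"after resize:  {new_bytes / 2**30:.2f} GiB")  # 7.11 GiB
```

This is Flatcar's first-boot root-filesystem expansion: the image ships a small filesystem and `extend-filesystems.service` grows it online to fill the partition.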
May 27 17:15:57.180300 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 17:15:57.180324 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 17:15:57.182014 update_engine[1481]: I20250527 17:15:57.181960 1481 update_check_scheduler.cc:74] Next update check in 11m47s May 27 17:15:57.182709 systemd[1]: Started update-engine.service - Update Engine. May 27 17:15:57.182727 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (Power Button) May 27 17:15:57.186778 systemd-logind[1472]: New seat seat0. May 27 17:15:57.186847 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 17:15:57.197270 systemd[1]: Started systemd-logind.service - User Login Management. May 27 17:15:57.213210 bash[1527]: Updated "/home/core/.ssh/authorized_keys" May 27 17:15:57.220600 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 17:15:57.222644 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 17:15:57.245286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 17:15:57.254842 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 17:15:57.354130 containerd[1492]: time="2025-05-27T17:15:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 17:15:57.355082 containerd[1492]: time="2025-05-27T17:15:57.355051017Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 17:15:57.364574 containerd[1492]: time="2025-05-27T17:15:57.364525878Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.08µs" May 27 17:15:57.364574 containerd[1492]: time="2025-05-27T17:15:57.364571744Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 17:15:57.364699 containerd[1492]: time="2025-05-27T17:15:57.364639727Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 17:15:57.364892 containerd[1492]: time="2025-05-27T17:15:57.364864569Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 17:15:57.364947 containerd[1492]: time="2025-05-27T17:15:57.364892113Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 17:15:57.364947 containerd[1492]: time="2025-05-27T17:15:57.364933776Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:15:57.365019 containerd[1492]: time="2025-05-27T17:15:57.364993027Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:15:57.365019 containerd[1492]: time="2025-05-27T17:15:57.365014940Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:15:57.365301 containerd[1492]: time="2025-05-27T17:15:57.365266877Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:15:57.365353 containerd[1492]: time="2025-05-27T17:15:57.365305357Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:15:57.365353 containerd[1492]: time="2025-05-27T17:15:57.365321598Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:15:57.365353 containerd[1492]: time="2025-05-27T17:15:57.365335839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 17:15:57.365476 containerd[1492]: time="2025-05-27T17:15:57.365435080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 17:15:57.366402 containerd[1492]: time="2025-05-27T17:15:57.366371499Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:15:57.366545 containerd[1492]: time="2025-05-27T17:15:57.366524441Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:15:57.366619 containerd[1492]: time="2025-05-27T17:15:57.366603890Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 17:15:57.367183 containerd[1492]: time="2025-05-27T17:15:57.367146489Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 17:15:57.367577 containerd[1492]: time="2025-05-27T17:15:57.367541289Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 17:15:57.367726 containerd[1492]: time="2025-05-27T17:15:57.367706554Z" level=info msg="metadata content store policy set" policy=shared May 27 17:15:57.371404 containerd[1492]: time="2025-05-27T17:15:57.371370619Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 17:15:57.371549 containerd[1492]: time="2025-05-27T17:15:57.371531110Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371631208Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371654875Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371667974Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371688255Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371703108Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371739752Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371750729Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371761706Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371773009Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371786230Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371910077Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371931418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371948639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:15:57.372205 containerd[1492]: time="2025-05-27T17:15:57.371960023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:15:57.372585 containerd[1492]: time="2025-05-27T17:15:57.371969735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:15:57.372585 containerd[1492]: time="2025-05-27T17:15:57.371979406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:15:57.372585 containerd[1492]: time="2025-05-27T17:15:57.371990424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:15:57.372585 containerd[1492]: time="2025-05-27T17:15:57.372001646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 
17:15:57.372585 containerd[1492]: time="2025-05-27T17:15:57.372020702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:15:57.372585 containerd[1492]: time="2025-05-27T17:15:57.372032536Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:15:57.372585 containerd[1492]: time="2025-05-27T17:15:57.372043227Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:15:57.374062 containerd[1492]: time="2025-05-27T17:15:57.374001963Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:15:57.374115 containerd[1492]: time="2025-05-27T17:15:57.374067008Z" level=info msg="Start snapshots syncer" May 27 17:15:57.374115 containerd[1492]: time="2025-05-27T17:15:57.374096429Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:15:57.374356 containerd[1492]: time="2025-05-27T17:15:57.374308132Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:15:57.374596 containerd[1492]: time="2025-05-27T17:15:57.374366403Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:15:57.374596 containerd[1492]: time="2025-05-27T17:15:57.374478293Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:15:57.374675 containerd[1492]: time="2025-05-27T17:15:57.374606955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:15:57.374675 containerd[1492]: time="2025-05-27T17:15:57.374633071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:15:57.374675 containerd[1492]: time="2025-05-27T17:15:57.374644007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:15:57.374675 containerd[1492]: time="2025-05-27T17:15:57.374667756Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:15:57.374752 containerd[1492]: time="2025-05-27T17:15:57.374681426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:15:57.374752 containerd[1492]: time="2025-05-27T17:15:57.374693056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:15:57.374752 containerd[1492]: time="2025-05-27T17:15:57.374703584Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:15:57.374752 containerd[1492]: time="2025-05-27T17:15:57.374733617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:15:57.374752 containerd[1492]: time="2025-05-27T17:15:57.374745982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:15:57.374834 containerd[1492]: time="2025-05-27T17:15:57.374757203Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:15:57.374834 containerd[1492]: time="2025-05-27T17:15:57.374791276Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:15:57.374834 containerd[1492]: time="2025-05-27T17:15:57.374805069Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:15:57.374834 containerd[1492]: time="2025-05-27T17:15:57.374814005Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:15:57.374834 containerd[1492]: time="2025-05-27T17:15:57.374823350Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:15:57.374834 containerd[1492]: time="2025-05-27T17:15:57.374831022Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:15:57.374933 containerd[1492]: time="2025-05-27T17:15:57.374840325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:15:57.374933 containerd[1492]: time="2025-05-27T17:15:57.374850772Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:15:57.374933 containerd[1492]: time="2025-05-27T17:15:57.374928875Z" level=info msg="runtime interface created" May 27 17:15:57.374982 containerd[1492]: time="2025-05-27T17:15:57.374934180Z" level=info msg="created NRI interface" May 27 17:15:57.374982 containerd[1492]: time="2025-05-27T17:15:57.374943565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:15:57.374982 containerd[1492]: time="2025-05-27T17:15:57.374954787Z" level=info msg="Connect containerd service" May 27 17:15:57.375030 containerd[1492]: time="2025-05-27T17:15:57.374992043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:15:57.375844 
containerd[1492]: time="2025-05-27T17:15:57.375806615Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:15:57.500680 containerd[1492]: time="2025-05-27T17:15:57.500618299Z" level=info msg="Start subscribing containerd event" May 27 17:15:57.501078 containerd[1492]: time="2025-05-27T17:15:57.501058965Z" level=info msg="Start recovering state" May 27 17:15:57.501227 containerd[1492]: time="2025-05-27T17:15:57.501209417Z" level=info msg="Start event monitor" May 27 17:15:57.501261 containerd[1492]: time="2025-05-27T17:15:57.501236022Z" level=info msg="Start cni network conf syncer for default" May 27 17:15:57.501261 containerd[1492]: time="2025-05-27T17:15:57.501248060Z" level=info msg="Start streaming server" May 27 17:15:57.501261 containerd[1492]: time="2025-05-27T17:15:57.501256466Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:15:57.501340 containerd[1492]: time="2025-05-27T17:15:57.501263893Z" level=info msg="runtime interface starting up..." May 27 17:15:57.501340 containerd[1492]: time="2025-05-27T17:15:57.501270259Z" level=info msg="starting plugins..." May 27 17:15:57.501340 containerd[1492]: time="2025-05-27T17:15:57.501284296Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:15:57.501649 containerd[1492]: time="2025-05-27T17:15:57.501626741Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:15:57.501760 containerd[1492]: time="2025-05-27T17:15:57.501745120Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:15:57.502079 systemd[1]: Started containerd.service - containerd container runtime. 
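The `failed to load cni during init ... no network config found in /etc/cni/net.d` error above is expected at this stage: containerd starts before any CNI plugin has been installed, and the CRI plugin retries once a network config appears. A minimal bridge conflist of the kind that would satisfy it looks roughly like this — illustrative names and subnet only, not values taken from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16"
      }
    }
  ]
}
```

In a Kubernetes setup this file is normally dropped into `/etc/cni/net.d/` by the cluster's network add-on rather than written by hand.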
May 27 17:15:57.503986 containerd[1492]: time="2025-05-27T17:15:57.503597474Z" level=info msg="containerd successfully booted in 0.149862s" May 27 17:15:57.589433 tar[1489]: linux-arm64/README.md May 27 17:15:57.603542 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:15:58.046093 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 17:15:58.065430 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 17:15:58.068584 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 17:15:58.087338 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:15:58.087618 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 17:15:58.090544 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 17:15:58.120540 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:15:58.124081 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:15:58.126901 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 27 17:15:58.128592 systemd[1]: Reached target getty.target - Login Prompts. May 27 17:15:58.667979 systemd-networkd[1420]: eth0: Gained IPv6LL May 27 17:15:58.670346 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:15:58.672297 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:15:58.674885 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 17:15:58.677360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:15:58.693381 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 17:15:58.717614 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:15:58.719640 systemd[1]: coreos-metadata.service: Deactivated successfully. 
May 27 17:15:58.719853 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 17:15:58.721878 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 17:15:59.251858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:15:59.253437 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:15:59.255347 systemd[1]: Startup finished in 2.143s (kernel) + 5.737s (initrd) + 3.939s (userspace) = 11.819s. May 27 17:15:59.255711 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:15:59.688253 kubelet[1600]: E0527 17:15:59.688119 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:15:59.690506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:15:59.690641 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:15:59.691119 systemd[1]: kubelet.service: Consumed 853ms CPU time, 255.3M memory peak. May 27 17:16:03.102032 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:16:03.103203 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:50910.service - OpenSSH per-connection server daemon (10.0.0.1:50910). May 27 17:16:03.186185 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 50910 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY May 27 17:16:03.187882 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:16:03.194186 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
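The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is likewise expected on a node that has not yet been joined to a cluster: that file is normally generated by `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it exists. For orientation only, a minimal hand-written `KubeletConfiguration` would look roughly like this (illustrative, not the file kubeadm would generate):

```yaml
# /var/lib/kubelet/config.yaml — minimal illustrative sketch;
# in practice this is written by kubeadm, not by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

The `cgroupDriver: systemd` choice matches the `SystemdCgroup: true` runc option visible in the containerd CRI configuration dumped earlier in this log.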
May 27 17:16:03.195201 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:16:03.200582 systemd-logind[1472]: New session 1 of user core. May 27 17:16:03.224264 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:16:03.229003 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:16:03.244826 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:16:03.247317 systemd-logind[1472]: New session c1 of user core. May 27 17:16:03.356861 systemd[1617]: Queued start job for default target default.target. May 27 17:16:03.380500 systemd[1617]: Created slice app.slice - User Application Slice. May 27 17:16:03.380660 systemd[1617]: Reached target paths.target - Paths. May 27 17:16:03.380767 systemd[1617]: Reached target timers.target - Timers. May 27 17:16:03.382196 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:16:03.391949 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:16:03.392022 systemd[1617]: Reached target sockets.target - Sockets. May 27 17:16:03.392066 systemd[1617]: Reached target basic.target - Basic System. May 27 17:16:03.392096 systemd[1617]: Reached target default.target - Main User Target. May 27 17:16:03.392123 systemd[1617]: Startup finished in 138ms. May 27 17:16:03.392395 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:16:03.394003 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:16:03.469033 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:50920.service - OpenSSH per-connection server daemon (10.0.0.1:50920). 
May 27 17:16:03.528353 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 50920 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY May 27 17:16:03.530040 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:16:03.534425 systemd-logind[1472]: New session 2 of user core. May 27 17:16:03.547647 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:16:03.601787 sshd[1630]: Connection closed by 10.0.0.1 port 50920 May 27 17:16:03.602180 sshd-session[1628]: pam_unix(sshd:session): session closed for user core May 27 17:16:03.613840 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:50920.service: Deactivated successfully. May 27 17:16:03.616072 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:16:03.617980 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. May 27 17:16:03.620863 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:50922.service - OpenSSH per-connection server daemon (10.0.0.1:50922). May 27 17:16:03.621312 systemd-logind[1472]: Removed session 2. May 27 17:16:03.677699 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 50922 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY May 27 17:16:03.679222 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:16:03.683517 systemd-logind[1472]: New session 3 of user core. May 27 17:16:03.693625 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:16:03.743026 sshd[1638]: Connection closed by 10.0.0.1 port 50922 May 27 17:16:03.743761 sshd-session[1636]: pam_unix(sshd:session): session closed for user core May 27 17:16:03.754564 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:50922.service: Deactivated successfully. May 27 17:16:03.756928 systemd[1]: session-3.scope: Deactivated successfully. May 27 17:16:03.757764 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. 
May 27 17:16:03.761114 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:50934.service - OpenSSH per-connection server daemon (10.0.0.1:50934).
May 27 17:16:03.762043 systemd-logind[1472]: Removed session 3.
May 27 17:16:03.820256 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 50934 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:16:03.822035 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:16:03.826633 systemd-logind[1472]: New session 4 of user core.
May 27 17:16:03.838655 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 17:16:03.891307 sshd[1646]: Connection closed by 10.0.0.1 port 50934
May 27 17:16:03.891770 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
May 27 17:16:03.903678 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:50934.service: Deactivated successfully.
May 27 17:16:03.905198 systemd[1]: session-4.scope: Deactivated successfully.
May 27 17:16:03.905930 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit.
May 27 17:16:03.908469 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:50940.service - OpenSSH per-connection server daemon (10.0.0.1:50940).
May 27 17:16:03.908942 systemd-logind[1472]: Removed session 4.
May 27 17:16:03.980386 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 50940 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:16:03.981682 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:16:03.985557 systemd-logind[1472]: New session 5 of user core.
May 27 17:16:03.991603 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 17:16:04.050369 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 17:16:04.050658 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:16:04.065139 sudo[1655]: pam_unix(sudo:session): session closed for user root
May 27 17:16:04.070626 sshd[1654]: Connection closed by 10.0.0.1 port 50940
May 27 17:16:04.071211 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
May 27 17:16:04.085693 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:50940.service: Deactivated successfully.
May 27 17:16:04.087437 systemd[1]: session-5.scope: Deactivated successfully.
May 27 17:16:04.089058 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit.
May 27 17:16:04.090807 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:50950.service - OpenSSH per-connection server daemon (10.0.0.1:50950).
May 27 17:16:04.091579 systemd-logind[1472]: Removed session 5.
May 27 17:16:04.147179 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 50950 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:16:04.148609 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:16:04.152635 systemd-logind[1472]: New session 6 of user core.
May 27 17:16:04.159594 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 17:16:04.211554 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 17:16:04.212143 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:16:04.286907 sudo[1665]: pam_unix(sudo:session): session closed for user root
May 27 17:16:04.292236 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 17:16:04.292543 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:16:04.301227 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:16:04.347827 augenrules[1687]: No rules
May 27 17:16:04.349297 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:16:04.349552 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:16:04.350681 sudo[1664]: pam_unix(sudo:session): session closed for user root
May 27 17:16:04.352474 sshd[1663]: Connection closed by 10.0.0.1 port 50950
May 27 17:16:04.352870 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
May 27 17:16:04.363662 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:50950.service: Deactivated successfully.
May 27 17:16:04.365183 systemd[1]: session-6.scope: Deactivated successfully.
May 27 17:16:04.365931 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit.
May 27 17:16:04.368440 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:50964.service - OpenSSH per-connection server daemon (10.0.0.1:50964).
May 27 17:16:04.369263 systemd-logind[1472]: Removed session 6.
May 27 17:16:04.423785 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 50964 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:16:04.425061 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:16:04.429507 systemd-logind[1472]: New session 7 of user core.
May 27 17:16:04.447631 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 17:16:04.498286 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 17:16:04.498887 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:16:04.893820 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 17:16:04.909808 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 17:16:05.200583 dockerd[1720]: time="2025-05-27T17:16:05.199564731Z" level=info msg="Starting up"
May 27 17:16:05.201118 dockerd[1720]: time="2025-05-27T17:16:05.201085591Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 17:16:05.225599 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport616777448-merged.mount: Deactivated successfully.
May 27 17:16:05.249981 dockerd[1720]: time="2025-05-27T17:16:05.249942276Z" level=info msg="Loading containers: start."
May 27 17:16:05.260470 kernel: Initializing XFRM netlink socket
May 27 17:16:05.446414 systemd-networkd[1420]: docker0: Link UP
May 27 17:16:05.449665 dockerd[1720]: time="2025-05-27T17:16:05.449628269Z" level=info msg="Loading containers: done."
May 27 17:16:05.463925 dockerd[1720]: time="2025-05-27T17:16:05.463820716Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 17:16:05.463925 dockerd[1720]: time="2025-05-27T17:16:05.463900545Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 17:16:05.464062 dockerd[1720]: time="2025-05-27T17:16:05.463994471Z" level=info msg="Initializing buildkit"
May 27 17:16:05.489160 dockerd[1720]: time="2025-05-27T17:16:05.489117451Z" level=info msg="Completed buildkit initialization"
May 27 17:16:05.493730 dockerd[1720]: time="2025-05-27T17:16:05.493703068Z" level=info msg="Daemon has completed initialization"
May 27 17:16:05.493821 dockerd[1720]: time="2025-05-27T17:16:05.493788818Z" level=info msg="API listen on /run/docker.sock"
May 27 17:16:05.493880 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 17:16:06.172252 containerd[1492]: time="2025-05-27T17:16:06.172204227Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 27 17:16:06.223115 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3976566299-merged.mount: Deactivated successfully.
May 27 17:16:06.746145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220848255.mount: Deactivated successfully.
May 27 17:16:08.102639 containerd[1492]: time="2025-05-27T17:16:08.102583942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:08.104236 containerd[1492]: time="2025-05-27T17:16:08.104206756Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326313"
May 27 17:16:08.105087 containerd[1492]: time="2025-05-27T17:16:08.104819385Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:08.107823 containerd[1492]: time="2025-05-27T17:16:08.107775555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:08.108641 containerd[1492]: time="2025-05-27T17:16:08.108613867Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.936369007s"
May 27 17:16:08.108694 containerd[1492]: time="2025-05-27T17:16:08.108649592Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\""
May 27 17:16:08.109354 containerd[1492]: time="2025-05-27T17:16:08.109332385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 17:16:09.536143 containerd[1492]: time="2025-05-27T17:16:09.535712338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:09.536863 containerd[1492]: time="2025-05-27T17:16:09.536827486Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530549"
May 27 17:16:09.537729 containerd[1492]: time="2025-05-27T17:16:09.537700576Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:09.540201 containerd[1492]: time="2025-05-27T17:16:09.540171084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:09.541586 containerd[1492]: time="2025-05-27T17:16:09.541528211Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.432140378s"
May 27 17:16:09.541586 containerd[1492]: time="2025-05-27T17:16:09.541575723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\""
May 27 17:16:09.542177 containerd[1492]: time="2025-05-27T17:16:09.542152575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 17:16:09.940970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:16:09.942646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:16:10.068067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:10.071386 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:16:10.107118 kubelet[1995]: E0527 17:16:10.107046 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:16:10.110008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:16:10.110158 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:16:10.110674 systemd[1]: kubelet.service: Consumed 140ms CPU time, 108.1M memory peak.
May 27 17:16:10.897908 containerd[1492]: time="2025-05-27T17:16:10.897806715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:10.898368 containerd[1492]: time="2025-05-27T17:16:10.898305039Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484192"
May 27 17:16:10.899323 containerd[1492]: time="2025-05-27T17:16:10.899286312Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:10.901678 containerd[1492]: time="2025-05-27T17:16:10.901653450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:10.903554 containerd[1492]: time="2025-05-27T17:16:10.903504401Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.361318335s"
May 27 17:16:10.903554 containerd[1492]: time="2025-05-27T17:16:10.903538161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\""
May 27 17:16:10.903925 containerd[1492]: time="2025-05-27T17:16:10.903893458Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 17:16:11.890345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738728489.mount: Deactivated successfully.
May 27 17:16:12.122509 containerd[1492]: time="2025-05-27T17:16:12.122453593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:12.123396 containerd[1492]: time="2025-05-27T17:16:12.123356442Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377377"
May 27 17:16:12.124134 containerd[1492]: time="2025-05-27T17:16:12.124097893Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:12.126068 containerd[1492]: time="2025-05-27T17:16:12.126029732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:12.126544 containerd[1492]: time="2025-05-27T17:16:12.126522428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.22252272s"
May 27 17:16:12.126597 containerd[1492]: time="2025-05-27T17:16:12.126546894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\""
May 27 17:16:12.127146 containerd[1492]: time="2025-05-27T17:16:12.127114915Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 17:16:12.826502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount194843652.mount: Deactivated successfully.
May 27 17:16:13.658496 containerd[1492]: time="2025-05-27T17:16:13.658410268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:13.658940 containerd[1492]: time="2025-05-27T17:16:13.658913704Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 27 17:16:13.659864 containerd[1492]: time="2025-05-27T17:16:13.659814041Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:13.662323 containerd[1492]: time="2025-05-27T17:16:13.662284184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:13.664338 containerd[1492]: time="2025-05-27T17:16:13.664232289Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.53708112s"
May 27 17:16:13.664338 containerd[1492]: time="2025-05-27T17:16:13.664273948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 27 17:16:13.665045 containerd[1492]: time="2025-05-27T17:16:13.664876538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:16:14.188201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226702511.mount: Deactivated successfully.
May 27 17:16:14.193592 containerd[1492]: time="2025-05-27T17:16:14.193537119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:16:14.194218 containerd[1492]: time="2025-05-27T17:16:14.194181217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 27 17:16:14.194952 containerd[1492]: time="2025-05-27T17:16:14.194919672Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:16:14.197348 containerd[1492]: time="2025-05-27T17:16:14.197303304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:16:14.198638 containerd[1492]: time="2025-05-27T17:16:14.198210870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 533.29865ms"
May 27 17:16:14.198638 containerd[1492]: time="2025-05-27T17:16:14.198266185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 27 17:16:14.198955 containerd[1492]: time="2025-05-27T17:16:14.198923230Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 27 17:16:14.766584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332935597.mount: Deactivated successfully.
May 27 17:16:16.898520 containerd[1492]: time="2025-05-27T17:16:16.898450266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:16.898983 containerd[1492]: time="2025-05-27T17:16:16.898937922Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 27 17:16:16.899980 containerd[1492]: time="2025-05-27T17:16:16.899939195Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:16.903088 containerd[1492]: time="2025-05-27T17:16:16.903051469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:16:16.904848 containerd[1492]: time="2025-05-27T17:16:16.904810709Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.705849484s"
May 27 17:16:16.904879 containerd[1492]: time="2025-05-27T17:16:16.904852014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 27 17:16:20.360709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 17:16:20.362751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:16:20.521695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:20.525279 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:16:20.564686 kubelet[2156]: E0527 17:16:20.563956 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:16:20.567818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:16:20.568058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:16:20.568590 systemd[1]: kubelet.service: Consumed 145ms CPU time, 106M memory peak.
May 27 17:16:21.884432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:21.884893 systemd[1]: kubelet.service: Consumed 145ms CPU time, 106M memory peak.
May 27 17:16:21.888708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:16:21.911658 systemd[1]: Reload requested from client PID 2171 ('systemctl') (unit session-7.scope)...
May 27 17:16:21.911789 systemd[1]: Reloading...
May 27 17:16:21.979529 zram_generator::config[2217]: No configuration found.
May 27 17:16:22.043253 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:16:22.129024 systemd[1]: Reloading finished in 216 ms.
May 27 17:16:22.179825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:22.181367 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:16:22.183191 systemd[1]: kubelet.service: Deactivated successfully.
May 27 17:16:22.184495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:22.184533 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.2M memory peak.
May 27 17:16:22.185846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:16:22.351513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:22.354913 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:16:22.387250 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:16:22.387250 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:16:22.387250 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:16:22.387565 kubelet[2261]: I0527 17:16:22.387295 2261 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:16:22.926472 kubelet[2261]: I0527 17:16:22.926298 2261 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 17:16:22.926472 kubelet[2261]: I0527 17:16:22.926338 2261 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:16:22.927030 kubelet[2261]: I0527 17:16:22.927004 2261 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 17:16:22.952945 kubelet[2261]: E0527 17:16:22.952907 2261 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:22.954025 kubelet[2261]: I0527 17:16:22.953999 2261 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:16:22.960671 kubelet[2261]: I0527 17:16:22.960633 2261 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:16:22.963358 kubelet[2261]: I0527 17:16:22.963331 2261 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:16:22.963992 kubelet[2261]: I0527 17:16:22.963953 2261 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:16:22.964162 kubelet[2261]: I0527 17:16:22.964003 2261 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:16:22.964246 kubelet[2261]: I0527 17:16:22.964235 2261 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:16:22.964246 kubelet[2261]: I0527 17:16:22.964245 2261 container_manager_linux.go:304] "Creating device plugin manager"
May 27 17:16:22.964457 kubelet[2261]: I0527 17:16:22.964428 2261 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:16:22.966843 kubelet[2261]: I0527 17:16:22.966810 2261 kubelet.go:446] "Attempting to sync node with API server"
May 27 17:16:22.966843 kubelet[2261]: I0527 17:16:22.966836 2261 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:16:22.966919 kubelet[2261]: I0527 17:16:22.966860 2261 kubelet.go:352] "Adding apiserver pod source"
May 27 17:16:22.966919 kubelet[2261]: I0527 17:16:22.966870 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:16:22.971408 kubelet[2261]: I0527 17:16:22.971371 2261 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:16:22.972058 kubelet[2261]: I0527 17:16:22.972030 2261 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 17:16:22.973198 kubelet[2261]: W0527 17:16:22.972156 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:16:22.973198 kubelet[2261]: W0527 17:16:22.972795 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
May 27 17:16:22.973198 kubelet[2261]: E0527 17:16:22.972856 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:22.973198 kubelet[2261]: I0527 17:16:22.973014 2261 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 17:16:22.973198 kubelet[2261]: I0527 17:16:22.973048 2261 server.go:1287] "Started kubelet"
May 27 17:16:22.973423 kubelet[2261]: I0527 17:16:22.973392 2261 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:16:22.975954 kubelet[2261]: I0527 17:16:22.975935 2261 server.go:479] "Adding debug handlers to kubelet server"
May 27 17:16:22.978580 kubelet[2261]: I0527 17:16:22.978557 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:16:22.978791 kubelet[2261]: I0527 17:16:22.978770 2261 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:16:22.979486 kubelet[2261]: I0527 17:16:22.979460 2261 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 17:16:22.979672 kubelet[2261]: E0527 17:16:22.979650 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:16:22.980200 kubelet[2261]: I0527 17:16:22.980173 2261 reconciler.go:26] "Reconciler: start to sync state"
May 27 17:16:22.980645 kubelet[2261]: I0527 17:16:22.980629 2261 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 17:16:22.981736 kubelet[2261]: E0527 17:16:22.981692 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms"
May 27 17:16:22.982048 kubelet[2261]: I0527 17:16:22.981997 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:16:22.983066 kubelet[2261]: I0527 17:16:22.982897 2261 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:16:22.983424 kubelet[2261]: W0527 17:16:22.983384 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
May 27 17:16:22.983496 kubelet[2261]: E0527 17:16:22.983431 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:22.983589 kubelet[2261]: I0527 17:16:22.983570 2261 factory.go:221] Registration of the systemd container factory successfully
May 27 17:16:22.983655 kubelet[2261]: I0527 17:16:22.983636 2261 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 17:16:22.984418 kubelet[2261]: E0527 17:16:22.984188 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184371c7b969ab13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:16:22.973025043 +0000 UTC m=+0.615266856,LastTimestamp:2025-05-27 17:16:22.973025043 +0000 UTC m=+0.615266856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 27 17:16:22.986965 kubelet[2261]: W0527 17:16:22.986903 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
May 27 17:16:22.987099 kubelet[2261]: E0527 17:16:22.986968 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:22.987192 kubelet[2261]: E0527 17:16:22.987122 2261 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 17:16:22.987887 kubelet[2261]: I0527 17:16:22.987860 2261 factory.go:221] Registration of the containerd container factory successfully
May 27 17:16:22.998277 kubelet[2261]: I0527 17:16:22.998232 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 17:16:22.998535 kubelet[2261]: I0527 17:16:22.998516 2261 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 17:16:22.998535 kubelet[2261]: I0527 17:16:22.998532 2261 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 17:16:22.998610 kubelet[2261]: I0527 17:16:22.998550 2261 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:16:22.999402 kubelet[2261]: I0527 17:16:22.999373 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 17:16:22.999402 kubelet[2261]: I0527 17:16:22.999403 2261 status_manager.go:227] "Starting to sync pod status with apiserver"
May 27 17:16:22.999635 kubelet[2261]: I0527 17:16:22.999421 2261 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 17:16:22.999635 kubelet[2261]: I0527 17:16:22.999428 2261 kubelet.go:2382] "Starting kubelet main sync loop"
May 27 17:16:22.999635 kubelet[2261]: E0527 17:16:22.999516 2261 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 17:16:23.080738 kubelet[2261]: E0527 17:16:23.080694 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:16:23.099995 kubelet[2261]: E0527 17:16:23.099961 2261 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:16:23.181347 kubelet[2261]: E0527 17:16:23.181262 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:16:23.182919 kubelet[2261]: E0527 17:16:23.182858 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms"
May 27 17:16:23.282544 kubelet[2261]: E0527 17:16:23.282509 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:16:23.282968 kubelet[2261]: I0527 17:16:23.282941 2261 policy_none.go:49] "None policy: Start"
May 27 17:16:23.282968 kubelet[2261]: I0527 17:16:23.282967 2261 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 17:16:23.283039 kubelet[2261]: I0527 17:16:23.282980 2261 state_mem.go:35] "Initializing new in-memory state store"
May 27 17:16:23.283039 kubelet[2261]: W0527 17:16:23.282990 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
May 27 17:16:23.283087 kubelet[2261]: E0527 17:16:23.283045 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:23.287704 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 17:16:23.301235 kubelet[2261]: E0527 17:16:23.301195 2261 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:16:23.301676 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 17:16:23.304811 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 17:16:23.325369 kubelet[2261]: I0527 17:16:23.325295 2261 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 17:16:23.325704 kubelet[2261]: I0527 17:16:23.325536 2261 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 17:16:23.325704 kubelet[2261]: I0527 17:16:23.325550 2261 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 17:16:23.325774 kubelet[2261]: I0527 17:16:23.325766 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 17:16:23.327350 kubelet[2261]: E0527 17:16:23.327268 2261 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 17:16:23.327350 kubelet[2261]: E0527 17:16:23.327328 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 27 17:16:23.427719 kubelet[2261]: I0527 17:16:23.427676 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 17:16:23.428258 kubelet[2261]: E0527 17:16:23.428210 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
May 27 17:16:23.583888 kubelet[2261]: E0527 17:16:23.583767 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms"
May 27 17:16:23.630122 kubelet[2261]: I0527 17:16:23.630087 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 17:16:23.630386 kubelet[2261]: E0527 17:16:23.630363 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
May 27 17:16:23.709457 systemd[1]: Created slice kubepods-burstable-pod285392ad00cb7b1258272b7286522e3a.slice - libcontainer container kubepods-burstable-pod285392ad00cb7b1258272b7286522e3a.slice.
May 27 17:16:23.739934 kubelet[2261]: E0527 17:16:23.739893 2261 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 17:16:23.742643 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice.
May 27 17:16:23.754477 kubelet[2261]: E0527 17:16:23.754433 2261 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 17:16:23.756775 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice.
May 27 17:16:23.758337 kubelet[2261]: E0527 17:16:23.758315 2261 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 17:16:23.785012 kubelet[2261]: I0527 17:16:23.784963 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/285392ad00cb7b1258272b7286522e3a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"285392ad00cb7b1258272b7286522e3a\") " pod="kube-system/kube-apiserver-localhost"
May 27 17:16:23.785218 kubelet[2261]: I0527 17:16:23.785178 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/285392ad00cb7b1258272b7286522e3a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"285392ad00cb7b1258272b7286522e3a\") " pod="kube-system/kube-apiserver-localhost"
May 27 17:16:23.785309 kubelet[2261]: I0527 17:16:23.785296 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:16:23.785425 kubelet[2261]: I0527 17:16:23.785393 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 27 17:16:23.785488 kubelet[2261]: I0527 17:16:23.785454 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/285392ad00cb7b1258272b7286522e3a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"285392ad00cb7b1258272b7286522e3a\") " pod="kube-system/kube-apiserver-localhost"
May 27 17:16:23.785488 kubelet[2261]: I0527 17:16:23.785480 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:16:23.785549 kubelet[2261]: I0527 17:16:23.785495 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:16:23.785549 kubelet[2261]: I0527 17:16:23.785525 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:16:23.785549 kubelet[2261]: I0527 17:16:23.785542 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:16:24.013395 kubelet[2261]: W0527 17:16:24.013292 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
May 27 17:16:24.013395 kubelet[2261]: E0527 17:16:24.013332 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:24.031706 kubelet[2261]: I0527 17:16:24.031679 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 17:16:24.032094 kubelet[2261]: E0527 17:16:24.032059 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
May 27 17:16:24.040930 kubelet[2261]: W0527 17:16:24.040858 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
May 27 17:16:24.040930 kubelet[2261]: E0527 17:16:24.040896 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:24.041024 containerd[1492]: time="2025-05-27T17:16:24.040861676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:285392ad00cb7b1258272b7286522e3a,Namespace:kube-system,Attempt:0,}"
May 27 17:16:24.055806 containerd[1492]: time="2025-05-27T17:16:24.055766515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}"
May 27 17:16:24.062407 containerd[1492]: time="2025-05-27T17:16:24.061800298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}"
May 27 17:16:24.063786 containerd[1492]: time="2025-05-27T17:16:24.063750926Z" level=info msg="connecting to shim b6fcfb101e0507aaabe50e78e0d5464c260219fc24a89aff0a88c2a0268300d7" address="unix:///run/containerd/s/4f71efda6d4d2b3fe4a5278529e1de4065cb35c509df32f2ec580a20b977b200" namespace=k8s.io protocol=ttrpc version=3
May 27 17:16:24.080515 containerd[1492]: time="2025-05-27T17:16:24.080466397Z" level=info msg="connecting to shim de8ac7cb80d6ca1ebca47f1087410d03f66459c1cadf046917380fdd30240239" address="unix:///run/containerd/s/7d5e6086021ce55904b800ddba3576c0fd5bf55357b04ecc0da88aae502816a8" namespace=k8s.io protocol=ttrpc version=3
May 27 17:16:24.090400 containerd[1492]: time="2025-05-27T17:16:24.090360293Z" level=info msg="connecting to shim bb1486843bc8ffa88bd426f4beb54f38d7e24512dacb427a322116673cd7ea06" address="unix:///run/containerd/s/b58b710cf905851c1b85cbc8dd1717b0b97af582d9d694e1c7f08b38165da985" namespace=k8s.io protocol=ttrpc version=3
May 27 17:16:24.099665 systemd[1]: Started cri-containerd-b6fcfb101e0507aaabe50e78e0d5464c260219fc24a89aff0a88c2a0268300d7.scope - libcontainer container b6fcfb101e0507aaabe50e78e0d5464c260219fc24a89aff0a88c2a0268300d7.
May 27 17:16:24.107137 systemd[1]: Started cri-containerd-de8ac7cb80d6ca1ebca47f1087410d03f66459c1cadf046917380fdd30240239.scope - libcontainer container de8ac7cb80d6ca1ebca47f1087410d03f66459c1cadf046917380fdd30240239.
May 27 17:16:24.113379 systemd[1]: Started cri-containerd-bb1486843bc8ffa88bd426f4beb54f38d7e24512dacb427a322116673cd7ea06.scope - libcontainer container bb1486843bc8ffa88bd426f4beb54f38d7e24512dacb427a322116673cd7ea06.
May 27 17:16:24.147919 containerd[1492]: time="2025-05-27T17:16:24.147879221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:285392ad00cb7b1258272b7286522e3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6fcfb101e0507aaabe50e78e0d5464c260219fc24a89aff0a88c2a0268300d7\""
May 27 17:16:24.150757 containerd[1492]: time="2025-05-27T17:16:24.150680515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"de8ac7cb80d6ca1ebca47f1087410d03f66459c1cadf046917380fdd30240239\""
May 27 17:16:24.152646 containerd[1492]: time="2025-05-27T17:16:24.152612812Z" level=info msg="CreateContainer within sandbox \"b6fcfb101e0507aaabe50e78e0d5464c260219fc24a89aff0a88c2a0268300d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 17:16:24.154047 containerd[1492]: time="2025-05-27T17:16:24.153804705Z" level=info msg="CreateContainer within sandbox \"de8ac7cb80d6ca1ebca47f1087410d03f66459c1cadf046917380fdd30240239\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 17:16:24.160779 containerd[1492]: time="2025-05-27T17:16:24.160673865Z" level=info msg="Container aa964be9b16827f4107ca2d8c7b7a7fff1dad3ac89655ca211de9da5731ea480: CDI devices from CRI Config.CDIDevices: []"
May 27 17:16:24.169934 containerd[1492]: time="2025-05-27T17:16:24.169885828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb1486843bc8ffa88bd426f4beb54f38d7e24512dacb427a322116673cd7ea06\""
May 27 17:16:24.172227 containerd[1492]: time="2025-05-27T17:16:24.172197814Z" level=info msg="CreateContainer within sandbox \"bb1486843bc8ffa88bd426f4beb54f38d7e24512dacb427a322116673cd7ea06\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 17:16:24.175066 containerd[1492]: time="2025-05-27T17:16:24.175034967Z" level=info msg="Container 8cbdb6567b797d4e8e7cee0929b910e2348dee1c489042a906cff79c77ef0af6: CDI devices from CRI Config.CDIDevices: []"
May 27 17:16:24.177745 containerd[1492]: time="2025-05-27T17:16:24.177701067Z" level=info msg="CreateContainer within sandbox \"de8ac7cb80d6ca1ebca47f1087410d03f66459c1cadf046917380fdd30240239\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa964be9b16827f4107ca2d8c7b7a7fff1dad3ac89655ca211de9da5731ea480\""
May 27 17:16:24.178432 containerd[1492]: time="2025-05-27T17:16:24.178406653Z" level=info msg="StartContainer for \"aa964be9b16827f4107ca2d8c7b7a7fff1dad3ac89655ca211de9da5731ea480\""
May 27 17:16:24.179580 containerd[1492]: time="2025-05-27T17:16:24.179552520Z" level=info msg="connecting to shim aa964be9b16827f4107ca2d8c7b7a7fff1dad3ac89655ca211de9da5731ea480" address="unix:///run/containerd/s/7d5e6086021ce55904b800ddba3576c0fd5bf55357b04ecc0da88aae502816a8" protocol=ttrpc version=3
May 27 17:16:24.181152 containerd[1492]: time="2025-05-27T17:16:24.181062267Z" level=info msg="CreateContainer within sandbox \"b6fcfb101e0507aaabe50e78e0d5464c260219fc24a89aff0a88c2a0268300d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8cbdb6567b797d4e8e7cee0929b910e2348dee1c489042a906cff79c77ef0af6\""
May 27 17:16:24.181695 containerd[1492]: time="2025-05-27T17:16:24.181669439Z" level=info msg="StartContainer for \"8cbdb6567b797d4e8e7cee0929b910e2348dee1c489042a906cff79c77ef0af6\""
May 27 17:16:24.182735 containerd[1492]: time="2025-05-27T17:16:24.182707687Z" level=info msg="connecting to shim 8cbdb6567b797d4e8e7cee0929b910e2348dee1c489042a906cff79c77ef0af6" address="unix:///run/containerd/s/4f71efda6d4d2b3fe4a5278529e1de4065cb35c509df32f2ec580a20b977b200" protocol=ttrpc version=3
May 27 17:16:24.184972 containerd[1492]: time="2025-05-27T17:16:24.184941911Z" level=info msg="Container 38f814cc10a7234c339cfe98348ff8d73b06722b5f2f76f3a3b4ddf3da2830ce: CDI devices from CRI Config.CDIDevices: []"
May 27 17:16:24.191366 containerd[1492]: time="2025-05-27T17:16:24.191322163Z" level=info msg="CreateContainer within sandbox \"bb1486843bc8ffa88bd426f4beb54f38d7e24512dacb427a322116673cd7ea06\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38f814cc10a7234c339cfe98348ff8d73b06722b5f2f76f3a3b4ddf3da2830ce\""
May 27 17:16:24.192111 containerd[1492]: time="2025-05-27T17:16:24.192056725Z" level=info msg="StartContainer for \"38f814cc10a7234c339cfe98348ff8d73b06722b5f2f76f3a3b4ddf3da2830ce\""
May 27 17:16:24.194885 containerd[1492]: time="2025-05-27T17:16:24.194817637Z" level=info msg="connecting to shim 38f814cc10a7234c339cfe98348ff8d73b06722b5f2f76f3a3b4ddf3da2830ce" address="unix:///run/containerd/s/b58b710cf905851c1b85cbc8dd1717b0b97af582d9d694e1c7f08b38165da985" protocol=ttrpc version=3
May 27 17:16:24.199620 systemd[1]: Started cri-containerd-aa964be9b16827f4107ca2d8c7b7a7fff1dad3ac89655ca211de9da5731ea480.scope - libcontainer container aa964be9b16827f4107ca2d8c7b7a7fff1dad3ac89655ca211de9da5731ea480.
May 27 17:16:24.203123 systemd[1]: Started cri-containerd-8cbdb6567b797d4e8e7cee0929b910e2348dee1c489042a906cff79c77ef0af6.scope - libcontainer container 8cbdb6567b797d4e8e7cee0929b910e2348dee1c489042a906cff79c77ef0af6.
May 27 17:16:24.220614 systemd[1]: Started cri-containerd-38f814cc10a7234c339cfe98348ff8d73b06722b5f2f76f3a3b4ddf3da2830ce.scope - libcontainer container 38f814cc10a7234c339cfe98348ff8d73b06722b5f2f76f3a3b4ddf3da2830ce.
May 27 17:16:24.256842 containerd[1492]: time="2025-05-27T17:16:24.256770672Z" level=info msg="StartContainer for \"aa964be9b16827f4107ca2d8c7b7a7fff1dad3ac89655ca211de9da5731ea480\" returns successfully"
May 27 17:16:24.279201 containerd[1492]: time="2025-05-27T17:16:24.275213689Z" level=info msg="StartContainer for \"8cbdb6567b797d4e8e7cee0929b910e2348dee1c489042a906cff79c77ef0af6\" returns successfully"
May 27 17:16:24.296161 containerd[1492]: time="2025-05-27T17:16:24.294306541Z" level=info msg="StartContainer for \"38f814cc10a7234c339cfe98348ff8d73b06722b5f2f76f3a3b4ddf3da2830ce\" returns successfully"
May 27 17:16:24.385522 kubelet[2261]: E0527 17:16:24.384627 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s"
May 27 17:16:24.423315 kubelet[2261]: W0527 17:16:24.423232 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
May 27 17:16:24.423465 kubelet[2261]: E0527 17:16:24.423316 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
May 27 17:16:24.833752 kubelet[2261]: I0527 17:16:24.833724 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 17:16:25.017139 kubelet[2261]: E0527 17:16:25.017028 2261 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 17:16:25.021188 kubelet[2261]: E0527 17:16:25.020827 2261 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 17:16:25.021188 kubelet[2261]: E0527 17:16:25.021191 2261 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 17:16:25.935765 kubelet[2261]: I0527 17:16:25.935707 2261 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 27 17:16:25.968747 kubelet[2261]: I0527 17:16:25.968706 2261 apiserver.go:52] "Watching apiserver"
May 27 17:16:25.980504 kubelet[2261]: I0527 17:16:25.980467 2261 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 27 17:16:25.981621 kubelet[2261]: I0527 17:16:25.981580 2261 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 17:16:25.988978 kubelet[2261]: E0527 17:16:25.988921 2261 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 27 17:16:25.988978 kubelet[2261]: I0527 17:16:25.988969 2261 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 27 17:16:25.991929 kubelet[2261]: E0527 17:16:25.991376 2261 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 27 17:16:25.991929 kubelet[2261]: I0527 17:16:25.991407 2261 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 27 17:16:25.994339 kubelet[2261]: E0527 17:16:25.994298 2261 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 27 17:16:26.027104 kubelet[2261]: I0527 17:16:26.026844 2261 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 27 17:16:26.027104 kubelet[2261]: I0527 17:16:26.026861 2261 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 27 17:16:26.029574 kubelet[2261]: E0527 17:16:26.029535 2261 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 27 17:16:26.030452 kubelet[2261]: E0527 17:16:26.029712 2261 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 27 17:16:27.817203 systemd[1]: Reload requested from client PID 2543 ('systemctl') (unit session-7.scope)...
May 27 17:16:27.817218 systemd[1]: Reloading...
May 27 17:16:27.887533 zram_generator::config[2586]: No configuration found.
May 27 17:16:27.957421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:16:28.056335 systemd[1]: Reloading finished in 238 ms.
May 27 17:16:28.080489 kubelet[2261]: I0527 17:16:28.079161 2261 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:16:28.079268 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:16:28.097751 systemd[1]: kubelet.service: Deactivated successfully.
May 27 17:16:28.098156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:28.098285 systemd[1]: kubelet.service: Consumed 990ms CPU time, 128.5M memory peak.
May 27 17:16:28.101597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:16:28.258032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:16:28.261498 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:16:28.298231 kubelet[2628]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:16:28.298231 kubelet[2628]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:16:28.298231 kubelet[2628]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:16:28.298825 kubelet[2628]: I0527 17:16:28.298264 2628 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:16:28.304896 kubelet[2628]: I0527 17:16:28.304852 2628 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 17:16:28.304896 kubelet[2628]: I0527 17:16:28.304880 2628 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:16:28.305122 kubelet[2628]: I0527 17:16:28.305106 2628 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 17:16:28.306315 kubelet[2628]: I0527 17:16:28.306288 2628 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 27 17:16:28.308526 kubelet[2628]: I0527 17:16:28.308479 2628 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:16:28.312904 kubelet[2628]: I0527 17:16:28.312857 2628 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:16:28.319004 kubelet[2628]: I0527 17:16:28.318973 2628 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:16:28.319355 kubelet[2628]: I0527 17:16:28.319317 2628 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:16:28.319620 kubelet[2628]: I0527 17:16:28.319345 2628 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:16:28.319697 kubelet[2628]: I0527 17:16:28.319624 2628 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:16:28.319697 kubelet[2628]: I0527 17:16:28.319646 2628 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:16:28.319697 kubelet[2628]: I0527 17:16:28.319694 2628 state_mem.go:36] "Initialized new in-memory state store" May 27 17:16:28.320035 kubelet[2628]: I0527 17:16:28.320000 2628 kubelet.go:446] "Attempting to sync node with API server" May 27 17:16:28.320035 kubelet[2628]: I0527 17:16:28.320029 2628 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:16:28.320083 kubelet[2628]: I0527 17:16:28.320059 2628 kubelet.go:352] "Adding apiserver pod source" May 27 17:16:28.320083 kubelet[2628]: I0527 17:16:28.320073 2628 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:16:28.321399 kubelet[2628]: I0527 17:16:28.321375 2628 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:16:28.321880 kubelet[2628]: I0527 17:16:28.321866 2628 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:16:28.323063 kubelet[2628]: I0527 17:16:28.322258 2628 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:16:28.323063 kubelet[2628]: I0527 17:16:28.322286 2628 server.go:1287] "Started kubelet" May 27 17:16:28.323206 kubelet[2628]: I0527 17:16:28.323163 2628 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:16:28.323518 kubelet[2628]: I0527 17:16:28.323502 2628 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:16:28.323699 kubelet[2628]: I0527 17:16:28.323667 2628 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:16:28.324737 kubelet[2628]: I0527 17:16:28.324716 2628 server.go:479] "Adding debug handlers to kubelet server" May 27 17:16:28.325052 kubelet[2628]: I0527 17:16:28.325019 2628 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:16:28.326255 kubelet[2628]: I0527 17:16:28.326230 2628 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:16:28.328247 kubelet[2628]: E0527 17:16:28.326388 2628 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:16:28.328511 kubelet[2628]: I0527 17:16:28.328489 2628 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:16:28.328832 kubelet[2628]: I0527 17:16:28.328814 2628 reconciler.go:26] "Reconciler: start to sync state" May 27 17:16:28.328905 kubelet[2628]: I0527 17:16:28.328894 2628 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:16:28.332312 kubelet[2628]: I0527 17:16:28.332222 2628 factory.go:221] Registration of the systemd container factory successfully May 27 17:16:28.332377 kubelet[2628]: I0527 17:16:28.332323 2628 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:16:28.333499 kubelet[2628]: I0527 17:16:28.333465 2628 factory.go:221] Registration of the containerd container factory successfully May 27 17:16:28.333989 kubelet[2628]: E0527 17:16:28.333953 2628 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:16:28.351249 kubelet[2628]: I0527 17:16:28.351215 2628 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:16:28.352500 kubelet[2628]: I0527 17:16:28.352188 2628 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 17:16:28.352500 kubelet[2628]: I0527 17:16:28.352223 2628 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:16:28.352500 kubelet[2628]: I0527 17:16:28.352241 2628 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:16:28.352500 kubelet[2628]: I0527 17:16:28.352247 2628 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:16:28.352500 kubelet[2628]: E0527 17:16:28.352285 2628 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:16:28.376374 kubelet[2628]: I0527 17:16:28.376341 2628 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:16:28.376374 kubelet[2628]: I0527 17:16:28.376362 2628 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:16:28.376374 kubelet[2628]: I0527 17:16:28.376383 2628 state_mem.go:36] "Initialized new in-memory state store" May 27 17:16:28.376614 kubelet[2628]: I0527 17:16:28.376588 2628 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:16:28.376653 kubelet[2628]: I0527 17:16:28.376613 2628 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:16:28.376653 kubelet[2628]: I0527 17:16:28.376632 2628 policy_none.go:49] "None policy: Start" May 27 17:16:28.376653 kubelet[2628]: I0527 17:16:28.376641 2628 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:16:28.376653 kubelet[2628]: I0527 17:16:28.376651 2628 state_mem.go:35] "Initializing new in-memory state store" May 27 17:16:28.376761 kubelet[2628]: I0527 17:16:28.376745 2628 state_mem.go:75] "Updated machine memory state" May 27 17:16:28.380891 kubelet[2628]: I0527 17:16:28.380726 2628 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:16:28.380891 kubelet[2628]: I0527 
17:16:28.380881 2628 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:16:28.380982 kubelet[2628]: I0527 17:16:28.380893 2628 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:16:28.381150 kubelet[2628]: I0527 17:16:28.381126 2628 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:16:28.383189 kubelet[2628]: E0527 17:16:28.383008 2628 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:16:28.452885 kubelet[2628]: I0527 17:16:28.452835 2628 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:16:28.453074 kubelet[2628]: I0527 17:16:28.452857 2628 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:16:28.453296 kubelet[2628]: I0527 17:16:28.453283 2628 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:16:28.482974 kubelet[2628]: I0527 17:16:28.482940 2628 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:16:28.489087 kubelet[2628]: I0527 17:16:28.489042 2628 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 17:16:28.489174 kubelet[2628]: I0527 17:16:28.489114 2628 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:16:28.630737 kubelet[2628]: I0527 17:16:28.630592 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/285392ad00cb7b1258272b7286522e3a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"285392ad00cb7b1258272b7286522e3a\") " pod="kube-system/kube-apiserver-localhost" May 27 17:16:28.630737 kubelet[2628]: I0527 17:16:28.630637 2628 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/285392ad00cb7b1258272b7286522e3a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"285392ad00cb7b1258272b7286522e3a\") " pod="kube-system/kube-apiserver-localhost" May 27 17:16:28.630737 kubelet[2628]: I0527 17:16:28.630668 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:16:28.630737 kubelet[2628]: I0527 17:16:28.630687 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:16:28.630737 kubelet[2628]: I0527 17:16:28.630704 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:16:28.631012 kubelet[2628]: I0527 17:16:28.630719 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 17:16:28.631012 kubelet[2628]: I0527 17:16:28.630734 2628 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/285392ad00cb7b1258272b7286522e3a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"285392ad00cb7b1258272b7286522e3a\") " pod="kube-system/kube-apiserver-localhost" May 27 17:16:28.631012 kubelet[2628]: I0527 17:16:28.630749 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:16:28.631012 kubelet[2628]: I0527 17:16:28.630774 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:16:29.321057 kubelet[2628]: I0527 17:16:29.321014 2628 apiserver.go:52] "Watching apiserver" May 27 17:16:29.330131 kubelet[2628]: I0527 17:16:29.329991 2628 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:16:29.363746 kubelet[2628]: I0527 17:16:29.363494 2628 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:16:29.363746 kubelet[2628]: I0527 17:16:29.363677 2628 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:16:29.371278 kubelet[2628]: E0527 17:16:29.371241 2628 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:16:29.371536 kubelet[2628]: 
E0527 17:16:29.371510 2628 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 17:16:29.410499 kubelet[2628]: I0527 17:16:29.410169 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.410153268 podStartE2EDuration="1.410153268s" podCreationTimestamp="2025-05-27 17:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:16:29.402976129 +0000 UTC m=+1.138555230" watchObservedRunningTime="2025-05-27 17:16:29.410153268 +0000 UTC m=+1.145732329" May 27 17:16:29.417784 kubelet[2628]: I0527 17:16:29.417734 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.417702682 podStartE2EDuration="1.417702682s" podCreationTimestamp="2025-05-27 17:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:16:29.410670668 +0000 UTC m=+1.146249729" watchObservedRunningTime="2025-05-27 17:16:29.417702682 +0000 UTC m=+1.153281743" May 27 17:16:29.417991 kubelet[2628]: I0527 17:16:29.417871 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.417865372 podStartE2EDuration="1.417865372s" podCreationTimestamp="2025-05-27 17:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:16:29.417677114 +0000 UTC m=+1.153256135" watchObservedRunningTime="2025-05-27 17:16:29.417865372 +0000 UTC m=+1.153444433" May 27 17:16:33.812904 kubelet[2628]: I0527 17:16:33.812872 2628 kuberuntime_manager.go:1702] "Updating runtime config 
through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:16:33.813672 containerd[1492]: time="2025-05-27T17:16:33.813637352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:16:33.814204 kubelet[2628]: I0527 17:16:33.814039 2628 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:16:33.847747 systemd[1]: Created slice kubepods-besteffort-pod9bad8c31_41fc_4a29_96cd_31589bdc0e98.slice - libcontainer container kubepods-besteffort-pod9bad8c31_41fc_4a29_96cd_31589bdc0e98.slice. May 27 17:16:33.863558 kubelet[2628]: I0527 17:16:33.863525 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9bad8c31-41fc-4a29-96cd-31589bdc0e98-kube-proxy\") pod \"kube-proxy-kczw2\" (UID: \"9bad8c31-41fc-4a29-96cd-31589bdc0e98\") " pod="kube-system/kube-proxy-kczw2" May 27 17:16:33.863558 kubelet[2628]: I0527 17:16:33.863565 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bad8c31-41fc-4a29-96cd-31589bdc0e98-xtables-lock\") pod \"kube-proxy-kczw2\" (UID: \"9bad8c31-41fc-4a29-96cd-31589bdc0e98\") " pod="kube-system/kube-proxy-kczw2" May 27 17:16:33.863692 kubelet[2628]: I0527 17:16:33.863584 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bad8c31-41fc-4a29-96cd-31589bdc0e98-lib-modules\") pod \"kube-proxy-kczw2\" (UID: \"9bad8c31-41fc-4a29-96cd-31589bdc0e98\") " pod="kube-system/kube-proxy-kczw2" May 27 17:16:33.863692 kubelet[2628]: I0527 17:16:33.863601 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wm6v\" (UniqueName: 
\"kubernetes.io/projected/9bad8c31-41fc-4a29-96cd-31589bdc0e98-kube-api-access-6wm6v\") pod \"kube-proxy-kczw2\" (UID: \"9bad8c31-41fc-4a29-96cd-31589bdc0e98\") " pod="kube-system/kube-proxy-kczw2" May 27 17:16:33.971988 kubelet[2628]: E0527 17:16:33.971919 2628 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 17:16:33.972208 kubelet[2628]: E0527 17:16:33.972072 2628 projected.go:194] Error preparing data for projected volume kube-api-access-6wm6v for pod kube-system/kube-proxy-kczw2: configmap "kube-root-ca.crt" not found May 27 17:16:33.972302 kubelet[2628]: E0527 17:16:33.972282 2628 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9bad8c31-41fc-4a29-96cd-31589bdc0e98-kube-api-access-6wm6v podName:9bad8c31-41fc-4a29-96cd-31589bdc0e98 nodeName:}" failed. No retries permitted until 2025-05-27 17:16:34.472257152 +0000 UTC m=+6.207836213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6wm6v" (UniqueName: "kubernetes.io/projected/9bad8c31-41fc-4a29-96cd-31589bdc0e98-kube-api-access-6wm6v") pod "kube-proxy-kczw2" (UID: "9bad8c31-41fc-4a29-96cd-31589bdc0e98") : configmap "kube-root-ca.crt" not found May 27 17:16:34.568507 kubelet[2628]: E0527 17:16:34.568422 2628 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 17:16:34.568507 kubelet[2628]: E0527 17:16:34.568473 2628 projected.go:194] Error preparing data for projected volume kube-api-access-6wm6v for pod kube-system/kube-proxy-kczw2: configmap "kube-root-ca.crt" not found May 27 17:16:34.568966 kubelet[2628]: E0527 17:16:34.568945 2628 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9bad8c31-41fc-4a29-96cd-31589bdc0e98-kube-api-access-6wm6v podName:9bad8c31-41fc-4a29-96cd-31589bdc0e98 nodeName:}" failed. 
No retries permitted until 2025-05-27 17:16:35.568511438 +0000 UTC m=+7.304090499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6wm6v" (UniqueName: "kubernetes.io/projected/9bad8c31-41fc-4a29-96cd-31589bdc0e98-kube-api-access-6wm6v") pod "kube-proxy-kczw2" (UID: "9bad8c31-41fc-4a29-96cd-31589bdc0e98") : configmap "kube-root-ca.crt" not found May 27 17:16:34.979475 systemd[1]: Created slice kubepods-besteffort-pod0343c7ed_075b_48a0_8b7b_7fa8d6a961df.slice - libcontainer container kubepods-besteffort-pod0343c7ed_075b_48a0_8b7b_7fa8d6a961df.slice. May 27 17:16:35.072495 kubelet[2628]: I0527 17:16:35.072426 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0343c7ed-075b-48a0-8b7b-7fa8d6a961df-var-lib-calico\") pod \"tigera-operator-844669ff44-sgmcj\" (UID: \"0343c7ed-075b-48a0-8b7b-7fa8d6a961df\") " pod="tigera-operator/tigera-operator-844669ff44-sgmcj" May 27 17:16:35.072495 kubelet[2628]: I0527 17:16:35.072493 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfg7x\" (UniqueName: \"kubernetes.io/projected/0343c7ed-075b-48a0-8b7b-7fa8d6a961df-kube-api-access-lfg7x\") pod \"tigera-operator-844669ff44-sgmcj\" (UID: \"0343c7ed-075b-48a0-8b7b-7fa8d6a961df\") " pod="tigera-operator/tigera-operator-844669ff44-sgmcj" May 27 17:16:35.285166 containerd[1492]: time="2025-05-27T17:16:35.285035624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-sgmcj,Uid:0343c7ed-075b-48a0-8b7b-7fa8d6a961df,Namespace:tigera-operator,Attempt:0,}" May 27 17:16:35.335629 containerd[1492]: time="2025-05-27T17:16:35.335526532Z" level=info msg="connecting to shim 2abc358e1dfded51a78668e458b675b704679338c72af7f566dc71965ba82d90" address="unix:///run/containerd/s/24435ef4ed9ab0878d3f9a52b0edc393489ed4a4d4e65e971ebc5bbdd6c09f04" namespace=k8s.io 
protocol=ttrpc version=3 May 27 17:16:35.362645 systemd[1]: Started cri-containerd-2abc358e1dfded51a78668e458b675b704679338c72af7f566dc71965ba82d90.scope - libcontainer container 2abc358e1dfded51a78668e458b675b704679338c72af7f566dc71965ba82d90. May 27 17:16:35.403330 containerd[1492]: time="2025-05-27T17:16:35.403253993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-sgmcj,Uid:0343c7ed-075b-48a0-8b7b-7fa8d6a961df,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2abc358e1dfded51a78668e458b675b704679338c72af7f566dc71965ba82d90\"" May 27 17:16:35.406737 containerd[1492]: time="2025-05-27T17:16:35.406703537Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:16:35.656318 containerd[1492]: time="2025-05-27T17:16:35.656233633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kczw2,Uid:9bad8c31-41fc-4a29-96cd-31589bdc0e98,Namespace:kube-system,Attempt:0,}" May 27 17:16:35.673222 containerd[1492]: time="2025-05-27T17:16:35.673170409Z" level=info msg="connecting to shim fb8cb8f319d2b8a1bf81ca9f8d93b7ca60f46a98fd30dcb2f1713d16d4e10b65" address="unix:///run/containerd/s/8e361772bd560c86ed86bf6021097964b7074f685e34086c6b7d2a8290b64dbe" namespace=k8s.io protocol=ttrpc version=3 May 27 17:16:35.705613 systemd[1]: Started cri-containerd-fb8cb8f319d2b8a1bf81ca9f8d93b7ca60f46a98fd30dcb2f1713d16d4e10b65.scope - libcontainer container fb8cb8f319d2b8a1bf81ca9f8d93b7ca60f46a98fd30dcb2f1713d16d4e10b65. 
May 27 17:16:35.727087 containerd[1492]: time="2025-05-27T17:16:35.727045887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kczw2,Uid:9bad8c31-41fc-4a29-96cd-31589bdc0e98,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb8cb8f319d2b8a1bf81ca9f8d93b7ca60f46a98fd30dcb2f1713d16d4e10b65\"" May 27 17:16:35.729944 containerd[1492]: time="2025-05-27T17:16:35.729895235Z" level=info msg="CreateContainer within sandbox \"fb8cb8f319d2b8a1bf81ca9f8d93b7ca60f46a98fd30dcb2f1713d16d4e10b65\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:16:35.742379 containerd[1492]: time="2025-05-27T17:16:35.742321624Z" level=info msg="Container 364e7ceaa48536432b475f5e1e728dd940d17a6355765d9fd6da9ef368eb279e: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:35.755541 containerd[1492]: time="2025-05-27T17:16:35.755506079Z" level=info msg="CreateContainer within sandbox \"fb8cb8f319d2b8a1bf81ca9f8d93b7ca60f46a98fd30dcb2f1713d16d4e10b65\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"364e7ceaa48536432b475f5e1e728dd940d17a6355765d9fd6da9ef368eb279e\"" May 27 17:16:35.757370 containerd[1492]: time="2025-05-27T17:16:35.757311627Z" level=info msg="StartContainer for \"364e7ceaa48536432b475f5e1e728dd940d17a6355765d9fd6da9ef368eb279e\"" May 27 17:16:35.759629 containerd[1492]: time="2025-05-27T17:16:35.759596226Z" level=info msg="connecting to shim 364e7ceaa48536432b475f5e1e728dd940d17a6355765d9fd6da9ef368eb279e" address="unix:///run/containerd/s/8e361772bd560c86ed86bf6021097964b7074f685e34086c6b7d2a8290b64dbe" protocol=ttrpc version=3 May 27 17:16:35.779605 systemd[1]: Started cri-containerd-364e7ceaa48536432b475f5e1e728dd940d17a6355765d9fd6da9ef368eb279e.scope - libcontainer container 364e7ceaa48536432b475f5e1e728dd940d17a6355765d9fd6da9ef368eb279e. 
May 27 17:16:35.812969 containerd[1492]: time="2025-05-27T17:16:35.812923199Z" level=info msg="StartContainer for \"364e7ceaa48536432b475f5e1e728dd940d17a6355765d9fd6da9ef368eb279e\" returns successfully" May 27 17:16:36.774722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176462077.mount: Deactivated successfully. May 27 17:16:37.551632 containerd[1492]: time="2025-05-27T17:16:37.551581297Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:37.552024 containerd[1492]: time="2025-05-27T17:16:37.551964883Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 27 17:16:37.552864 containerd[1492]: time="2025-05-27T17:16:37.552832313Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:37.554933 containerd[1492]: time="2025-05-27T17:16:37.554895508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:37.555552 containerd[1492]: time="2025-05-27T17:16:37.555527017Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.148784394s" May 27 17:16:37.555580 containerd[1492]: time="2025-05-27T17:16:37.555558223Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 27 17:16:37.559241 containerd[1492]: 
time="2025-05-27T17:16:37.559209812Z" level=info msg="CreateContainer within sandbox \"2abc358e1dfded51a78668e458b675b704679338c72af7f566dc71965ba82d90\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:16:37.564662 containerd[1492]: time="2025-05-27T17:16:37.564623466Z" level=info msg="Container cc92eed8ac2aec0eca29a2e2b4b967ed00fb8456c648be09d5bdb8e11193c20d: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:37.571302 containerd[1492]: time="2025-05-27T17:16:37.571257489Z" level=info msg="CreateContainer within sandbox \"2abc358e1dfded51a78668e458b675b704679338c72af7f566dc71965ba82d90\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cc92eed8ac2aec0eca29a2e2b4b967ed00fb8456c648be09d5bdb8e11193c20d\"" May 27 17:16:37.571771 containerd[1492]: time="2025-05-27T17:16:37.571739052Z" level=info msg="StartContainer for \"cc92eed8ac2aec0eca29a2e2b4b967ed00fb8456c648be09d5bdb8e11193c20d\"" May 27 17:16:37.572603 containerd[1492]: time="2025-05-27T17:16:37.572574236Z" level=info msg="connecting to shim cc92eed8ac2aec0eca29a2e2b4b967ed00fb8456c648be09d5bdb8e11193c20d" address="unix:///run/containerd/s/24435ef4ed9ab0878d3f9a52b0edc393489ed4a4d4e65e971ebc5bbdd6c09f04" protocol=ttrpc version=3 May 27 17:16:37.597698 systemd[1]: Started cri-containerd-cc92eed8ac2aec0eca29a2e2b4b967ed00fb8456c648be09d5bdb8e11193c20d.scope - libcontainer container cc92eed8ac2aec0eca29a2e2b4b967ed00fb8456c648be09d5bdb8e11193c20d. 
May 27 17:16:37.622659 containerd[1492]: time="2025-05-27T17:16:37.622616824Z" level=info msg="StartContainer for \"cc92eed8ac2aec0eca29a2e2b4b967ed00fb8456c648be09d5bdb8e11193c20d\" returns successfully" May 27 17:16:38.392132 kubelet[2628]: I0527 17:16:38.392053 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kczw2" podStartSLOduration=5.392033521 podStartE2EDuration="5.392033521s" podCreationTimestamp="2025-05-27 17:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:16:36.402133688 +0000 UTC m=+8.137712749" watchObservedRunningTime="2025-05-27 17:16:38.392033521 +0000 UTC m=+10.127612582" May 27 17:16:38.392666 kubelet[2628]: I0527 17:16:38.392243 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-sgmcj" podStartSLOduration=2.24018281 podStartE2EDuration="4.392236714s" podCreationTimestamp="2025-05-27 17:16:34 +0000 UTC" firstStartedPulling="2025-05-27 17:16:35.405341315 +0000 UTC m=+7.140920376" lastFinishedPulling="2025-05-27 17:16:37.557395219 +0000 UTC m=+9.292974280" observedRunningTime="2025-05-27 17:16:38.391677582 +0000 UTC m=+10.127256644" watchObservedRunningTime="2025-05-27 17:16:38.392236714 +0000 UTC m=+10.127815735" May 27 17:16:42.439282 update_engine[1481]: I20250527 17:16:42.439209 1481 update_attempter.cc:509] Updating boot flags... May 27 17:16:42.819002 sudo[1700]: pam_unix(sudo:session): session closed for user root May 27 17:16:42.827815 sshd[1699]: Connection closed by 10.0.0.1 port 50964 May 27 17:16:42.828743 sshd-session[1696]: pam_unix(sshd:session): session closed for user core May 27 17:16:42.834378 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:50964.service: Deactivated successfully. May 27 17:16:42.837179 systemd[1]: session-7.scope: Deactivated successfully. 
May 27 17:16:42.838631 systemd[1]: session-7.scope: Consumed 7.006s CPU time, 229M memory peak. May 27 17:16:42.839946 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. May 27 17:16:42.842459 systemd-logind[1472]: Removed session 7. May 27 17:16:46.549616 systemd[1]: Created slice kubepods-besteffort-pod3be11038_5d7b_4672_b674_edca7ab25112.slice - libcontainer container kubepods-besteffort-pod3be11038_5d7b_4672_b674_edca7ab25112.slice. May 27 17:16:46.561315 kubelet[2628]: I0527 17:16:46.560934 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3be11038-5d7b-4672-b674-edca7ab25112-typha-certs\") pod \"calico-typha-5cf7b6b547-mfsb5\" (UID: \"3be11038-5d7b-4672-b674-edca7ab25112\") " pod="calico-system/calico-typha-5cf7b6b547-mfsb5" May 27 17:16:46.561315 kubelet[2628]: I0527 17:16:46.560990 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be11038-5d7b-4672-b674-edca7ab25112-tigera-ca-bundle\") pod \"calico-typha-5cf7b6b547-mfsb5\" (UID: \"3be11038-5d7b-4672-b674-edca7ab25112\") " pod="calico-system/calico-typha-5cf7b6b547-mfsb5" May 27 17:16:46.561315 kubelet[2628]: I0527 17:16:46.561015 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvh6\" (UniqueName: \"kubernetes.io/projected/3be11038-5d7b-4672-b674-edca7ab25112-kube-api-access-9mvh6\") pod \"calico-typha-5cf7b6b547-mfsb5\" (UID: \"3be11038-5d7b-4672-b674-edca7ab25112\") " pod="calico-system/calico-typha-5cf7b6b547-mfsb5" May 27 17:16:46.856372 containerd[1492]: time="2025-05-27T17:16:46.856322780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cf7b6b547-mfsb5,Uid:3be11038-5d7b-4672-b674-edca7ab25112,Namespace:calico-system,Attempt:0,}" May 27 17:16:46.889698 containerd[1492]: 
time="2025-05-27T17:16:46.889477551Z" level=info msg="connecting to shim d0dc0758a13ef26cffef33d30f3b6d63980aaa5fac8fd53771aa30d7f4db2890" address="unix:///run/containerd/s/72479dfc5a391e07b4886588e94258dc961692cf5c9b507997ba4c2a71e8f249" namespace=k8s.io protocol=ttrpc version=3 May 27 17:16:46.942144 systemd[1]: Created slice kubepods-besteffort-pod4b9ed168_5c9f_46ad_a5e6_38effa1e3e84.slice - libcontainer container kubepods-besteffort-pod4b9ed168_5c9f_46ad_a5e6_38effa1e3e84.slice. May 27 17:16:46.962619 systemd[1]: Started cri-containerd-d0dc0758a13ef26cffef33d30f3b6d63980aaa5fac8fd53771aa30d7f4db2890.scope - libcontainer container d0dc0758a13ef26cffef33d30f3b6d63980aaa5fac8fd53771aa30d7f4db2890. May 27 17:16:46.963688 kubelet[2628]: I0527 17:16:46.963581 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-flexvol-driver-host\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963688 kubelet[2628]: I0527 17:16:46.963618 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9gf\" (UniqueName: \"kubernetes.io/projected/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-kube-api-access-rr9gf\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963688 kubelet[2628]: I0527 17:16:46.963637 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-cni-log-dir\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963688 kubelet[2628]: I0527 17:16:46.963654 2628 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-tigera-ca-bundle\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963688 kubelet[2628]: I0527 17:16:46.963669 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-xtables-lock\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963868 kubelet[2628]: I0527 17:16:46.963722 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-cni-net-dir\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963868 kubelet[2628]: I0527 17:16:46.963757 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-lib-modules\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963868 kubelet[2628]: I0527 17:16:46.963772 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-var-run-calico\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963868 kubelet[2628]: I0527 17:16:46.963790 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-policysync\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963868 kubelet[2628]: I0527 17:16:46.963830 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-cni-bin-dir\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963988 kubelet[2628]: I0527 17:16:46.963865 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-node-certs\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:46.963988 kubelet[2628]: I0527 17:16:46.963884 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b9ed168-5c9f-46ad-a5e6-38effa1e3e84-var-lib-calico\") pod \"calico-node-78x4s\" (UID: \"4b9ed168-5c9f-46ad-a5e6-38effa1e3e84\") " pod="calico-system/calico-node-78x4s" May 27 17:16:47.017056 containerd[1492]: time="2025-05-27T17:16:47.016946798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cf7b6b547-mfsb5,Uid:3be11038-5d7b-4672-b674-edca7ab25112,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0dc0758a13ef26cffef33d30f3b6d63980aaa5fac8fd53771aa30d7f4db2890\"" May 27 17:16:47.027404 containerd[1492]: time="2025-05-27T17:16:47.027363160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:16:47.066869 kubelet[2628]: E0527 17:16:47.066826 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input May 27 17:16:47.066869 kubelet[2628]: W0527 17:16:47.066852 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.069494 kubelet[2628]: E0527 17:16:47.069431 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.069494 kubelet[2628]: W0527 17:16:47.069464 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.069731 kubelet[2628]: E0527 17:16:47.069690 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.070452 kubelet[2628]: E0527 17:16:47.070400 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.071551 kubelet[2628]: E0527 17:16:47.071529 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.071551 kubelet[2628]: W0527 17:16:47.071544 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.072558 kubelet[2628]: E0527 17:16:47.071564 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.072558 kubelet[2628]: E0527 17:16:47.071749 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.072558 kubelet[2628]: W0527 17:16:47.071758 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.072558 kubelet[2628]: E0527 17:16:47.071775 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.072558 kubelet[2628]: E0527 17:16:47.071902 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.072558 kubelet[2628]: W0527 17:16:47.071918 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.072558 kubelet[2628]: E0527 17:16:47.071926 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.078509 kubelet[2628]: E0527 17:16:47.078489 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.078509 kubelet[2628]: W0527 17:16:47.078507 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.078587 kubelet[2628]: E0527 17:16:47.078522 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.220595 kubelet[2628]: E0527 17:16:47.219808 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v7797" podUID="b3279a25-7c72-48a0-98d2-a10f51772f5c" May 27 17:16:47.246343 containerd[1492]: time="2025-05-27T17:16:47.246278543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-78x4s,Uid:4b9ed168-5c9f-46ad-a5e6-38effa1e3e84,Namespace:calico-system,Attempt:0,}" May 27 17:16:47.248655 kubelet[2628]: E0527 17:16:47.248627 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.248655 kubelet[2628]: W0527 17:16:47.248648 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.249586 kubelet[2628]: E0527 17:16:47.248668 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.249711 kubelet[2628]: E0527 17:16:47.249690 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.249762 kubelet[2628]: W0527 17:16:47.249714 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.249762 kubelet[2628]: E0527 17:16:47.249763 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.249984 kubelet[2628]: E0527 17:16:47.249970 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.249984 kubelet[2628]: W0527 17:16:47.249983 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.250057 kubelet[2628]: E0527 17:16:47.249992 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.250486 kubelet[2628]: E0527 17:16:47.250426 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.250486 kubelet[2628]: W0527 17:16:47.250453 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.250486 kubelet[2628]: E0527 17:16:47.250465 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.250964 kubelet[2628]: E0527 17:16:47.250908 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.250964 kubelet[2628]: W0527 17:16:47.250938 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.250964 kubelet[2628]: E0527 17:16:47.250950 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.251420 kubelet[2628]: E0527 17:16:47.251397 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.251420 kubelet[2628]: W0527 17:16:47.251413 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.251529 kubelet[2628]: E0527 17:16:47.251425 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.252117 kubelet[2628]: E0527 17:16:47.252093 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.252117 kubelet[2628]: W0527 17:16:47.252114 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.252203 kubelet[2628]: E0527 17:16:47.252126 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.252339 kubelet[2628]: E0527 17:16:47.252315 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.252339 kubelet[2628]: W0527 17:16:47.252328 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.252339 kubelet[2628]: E0527 17:16:47.252338 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.252591 kubelet[2628]: E0527 17:16:47.252571 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.252591 kubelet[2628]: W0527 17:16:47.252583 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.252665 kubelet[2628]: E0527 17:16:47.252593 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.252771 kubelet[2628]: E0527 17:16:47.252755 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.252806 kubelet[2628]: W0527 17:16:47.252768 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.252806 kubelet[2628]: E0527 17:16:47.252794 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.252943 kubelet[2628]: E0527 17:16:47.252917 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.253005 kubelet[2628]: W0527 17:16:47.252988 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.253033 kubelet[2628]: E0527 17:16:47.253005 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.253349 kubelet[2628]: E0527 17:16:47.253213 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.253349 kubelet[2628]: W0527 17:16:47.253228 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.253349 kubelet[2628]: E0527 17:16:47.253237 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.253789 kubelet[2628]: E0527 17:16:47.253493 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.253789 kubelet[2628]: W0527 17:16:47.253510 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.253949 kubelet[2628]: E0527 17:16:47.253916 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.254173 kubelet[2628]: E0527 17:16:47.254152 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.254173 kubelet[2628]: W0527 17:16:47.254167 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.254305 kubelet[2628]: E0527 17:16:47.254179 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.254593 kubelet[2628]: E0527 17:16:47.254572 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.254593 kubelet[2628]: W0527 17:16:47.254586 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.254671 kubelet[2628]: E0527 17:16:47.254598 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.255228 kubelet[2628]: E0527 17:16:47.255204 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.255228 kubelet[2628]: W0527 17:16:47.255219 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.255228 kubelet[2628]: E0527 17:16:47.255231 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.255625 kubelet[2628]: E0527 17:16:47.255601 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.255625 kubelet[2628]: W0527 17:16:47.255621 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.255697 kubelet[2628]: E0527 17:16:47.255633 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.255890 kubelet[2628]: E0527 17:16:47.255874 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.255933 kubelet[2628]: W0527 17:16:47.255891 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.255933 kubelet[2628]: E0527 17:16:47.255902 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.256175 kubelet[2628]: E0527 17:16:47.256156 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.256175 kubelet[2628]: W0527 17:16:47.256172 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.256309 kubelet[2628]: E0527 17:16:47.256184 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.257465 kubelet[2628]: E0527 17:16:47.256483 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.257465 kubelet[2628]: W0527 17:16:47.256496 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.257465 kubelet[2628]: E0527 17:16:47.256526 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.265950 kubelet[2628]: E0527 17:16:47.265903 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.265950 kubelet[2628]: W0527 17:16:47.265932 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.265950 kubelet[2628]: E0527 17:16:47.265951 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.266249 kubelet[2628]: I0527 17:16:47.265980 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3279a25-7c72-48a0-98d2-a10f51772f5c-kubelet-dir\") pod \"csi-node-driver-v7797\" (UID: \"b3279a25-7c72-48a0-98d2-a10f51772f5c\") " pod="calico-system/csi-node-driver-v7797" May 27 17:16:47.266341 kubelet[2628]: E0527 17:16:47.266320 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.266341 kubelet[2628]: W0527 17:16:47.266336 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.266414 kubelet[2628]: E0527 17:16:47.266354 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.266414 kubelet[2628]: I0527 17:16:47.266373 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b3279a25-7c72-48a0-98d2-a10f51772f5c-registration-dir\") pod \"csi-node-driver-v7797\" (UID: \"b3279a25-7c72-48a0-98d2-a10f51772f5c\") " pod="calico-system/csi-node-driver-v7797" May 27 17:16:47.268606 kubelet[2628]: E0527 17:16:47.268577 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.268606 kubelet[2628]: W0527 17:16:47.268598 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.268713 kubelet[2628]: E0527 17:16:47.268620 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.268713 kubelet[2628]: I0527 17:16:47.268642 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b3279a25-7c72-48a0-98d2-a10f51772f5c-socket-dir\") pod \"csi-node-driver-v7797\" (UID: \"b3279a25-7c72-48a0-98d2-a10f51772f5c\") " pod="calico-system/csi-node-driver-v7797" May 27 17:16:47.268873 kubelet[2628]: E0527 17:16:47.268858 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.268908 kubelet[2628]: W0527 17:16:47.268873 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.268981 kubelet[2628]: E0527 17:16:47.268966 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.269009 kubelet[2628]: I0527 17:16:47.268994 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km95r\" (UniqueName: \"kubernetes.io/projected/b3279a25-7c72-48a0-98d2-a10f51772f5c-kube-api-access-km95r\") pod \"csi-node-driver-v7797\" (UID: \"b3279a25-7c72-48a0-98d2-a10f51772f5c\") " pod="calico-system/csi-node-driver-v7797" May 27 17:16:47.269179 kubelet[2628]: E0527 17:16:47.269156 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.269179 kubelet[2628]: W0527 17:16:47.269168 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.269278 kubelet[2628]: E0527 17:16:47.269204 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.269610 kubelet[2628]: E0527 17:16:47.269358 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.269610 kubelet[2628]: W0527 17:16:47.269373 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.269610 kubelet[2628]: E0527 17:16:47.269457 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.269610 kubelet[2628]: E0527 17:16:47.269603 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.269610 kubelet[2628]: W0527 17:16:47.269613 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.270166 kubelet[2628]: E0527 17:16:47.269629 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.270166 kubelet[2628]: E0527 17:16:47.269892 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.270166 kubelet[2628]: W0527 17:16:47.269903 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.270166 kubelet[2628]: E0527 17:16:47.269920 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.270166 kubelet[2628]: I0527 17:16:47.270032 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b3279a25-7c72-48a0-98d2-a10f51772f5c-varrun\") pod \"csi-node-driver-v7797\" (UID: \"b3279a25-7c72-48a0-98d2-a10f51772f5c\") " pod="calico-system/csi-node-driver-v7797" May 27 17:16:47.270276 kubelet[2628]: E0527 17:16:47.270178 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.270276 kubelet[2628]: W0527 17:16:47.270191 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.270276 kubelet[2628]: E0527 17:16:47.270204 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.270911 kubelet[2628]: E0527 17:16:47.270364 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.270911 kubelet[2628]: W0527 17:16:47.270376 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.270911 kubelet[2628]: E0527 17:16:47.270392 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.270911 kubelet[2628]: E0527 17:16:47.270742 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.270911 kubelet[2628]: W0527 17:16:47.270754 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.270911 kubelet[2628]: E0527 17:16:47.270767 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.271147 kubelet[2628]: E0527 17:16:47.270945 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.271147 kubelet[2628]: W0527 17:16:47.270955 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.271147 kubelet[2628]: E0527 17:16:47.270964 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.271147 kubelet[2628]: E0527 17:16:47.271142 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.271699 kubelet[2628]: W0527 17:16:47.271152 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.271699 kubelet[2628]: E0527 17:16:47.271161 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.271699 kubelet[2628]: E0527 17:16:47.271326 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.271699 kubelet[2628]: W0527 17:16:47.271335 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.271699 kubelet[2628]: E0527 17:16:47.271343 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.271699 kubelet[2628]: E0527 17:16:47.271593 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.271699 kubelet[2628]: W0527 17:16:47.271602 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.271699 kubelet[2628]: E0527 17:16:47.271645 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.288791 containerd[1492]: time="2025-05-27T17:16:47.288721833Z" level=info msg="connecting to shim c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b" address="unix:///run/containerd/s/eb901725afac4a04705f96ef3afecc6dac63444874d393216a8f6ee16797a90c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:16:47.327661 systemd[1]: Started cri-containerd-c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b.scope - libcontainer container c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b. May 27 17:16:47.372653 kubelet[2628]: E0527 17:16:47.372601 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.372653 kubelet[2628]: W0527 17:16:47.372646 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.372827 kubelet[2628]: E0527 17:16:47.372668 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.372921 kubelet[2628]: E0527 17:16:47.372909 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.372921 kubelet[2628]: W0527 17:16:47.372919 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.372984 kubelet[2628]: E0527 17:16:47.372960 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.373250 kubelet[2628]: E0527 17:16:47.373231 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.373288 kubelet[2628]: W0527 17:16:47.373268 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.373324 kubelet[2628]: E0527 17:16:47.373290 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.373510 kubelet[2628]: E0527 17:16:47.373498 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.373557 kubelet[2628]: W0527 17:16:47.373511 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.373557 kubelet[2628]: E0527 17:16:47.373527 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.373728 kubelet[2628]: E0527 17:16:47.373712 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.373856 kubelet[2628]: W0527 17:16:47.373839 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.373856 kubelet[2628]: E0527 17:16:47.373865 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.374247 kubelet[2628]: E0527 17:16:47.374230 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.374247 kubelet[2628]: W0527 17:16:47.374244 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.374413 kubelet[2628]: E0527 17:16:47.374261 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.374501 kubelet[2628]: E0527 17:16:47.374485 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.374501 kubelet[2628]: W0527 17:16:47.374498 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.375378 kubelet[2628]: E0527 17:16:47.375257 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.375585 kubelet[2628]: E0527 17:16:47.375567 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.375781 kubelet[2628]: W0527 17:16:47.375659 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.375781 kubelet[2628]: E0527 17:16:47.375690 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.376552 containerd[1492]: time="2025-05-27T17:16:47.376150796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-78x4s,Uid:4b9ed168-5c9f-46ad-a5e6-38effa1e3e84,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b\"" May 27 17:16:47.376958 kubelet[2628]: E0527 17:16:47.376659 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.376958 kubelet[2628]: W0527 17:16:47.376679 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.376958 kubelet[2628]: E0527 17:16:47.376704 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.377861 kubelet[2628]: E0527 17:16:47.377531 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.377861 kubelet[2628]: W0527 17:16:47.377551 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.377861 kubelet[2628]: E0527 17:16:47.377586 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.378333 kubelet[2628]: E0527 17:16:47.378313 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.378723 kubelet[2628]: W0527 17:16:47.378613 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.378723 kubelet[2628]: E0527 17:16:47.378678 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.378911 kubelet[2628]: E0527 17:16:47.378896 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.378979 kubelet[2628]: W0527 17:16:47.378967 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.379088 kubelet[2628]: E0527 17:16:47.379058 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.379427 kubelet[2628]: E0527 17:16:47.379270 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.379427 kubelet[2628]: W0527 17:16:47.379290 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.379546 kubelet[2628]: E0527 17:16:47.379473 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.380393 kubelet[2628]: E0527 17:16:47.380228 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.380393 kubelet[2628]: W0527 17:16:47.380244 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.380393 kubelet[2628]: E0527 17:16:47.380263 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.380591 kubelet[2628]: E0527 17:16:47.380575 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.381004 kubelet[2628]: W0527 17:16:47.380798 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.381004 kubelet[2628]: E0527 17:16:47.380854 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.381661 kubelet[2628]: E0527 17:16:47.381642 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.382150 kubelet[2628]: W0527 17:16:47.381998 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.382150 kubelet[2628]: E0527 17:16:47.382073 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.382401 kubelet[2628]: E0527 17:16:47.382304 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.382401 kubelet[2628]: W0527 17:16:47.382320 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.382401 kubelet[2628]: E0527 17:16:47.382365 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.382647 kubelet[2628]: E0527 17:16:47.382564 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.382878 kubelet[2628]: W0527 17:16:47.382711 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.382878 kubelet[2628]: E0527 17:16:47.382818 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.383193 kubelet[2628]: E0527 17:16:47.383075 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.383193 kubelet[2628]: W0527 17:16:47.383092 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.383193 kubelet[2628]: E0527 17:16:47.383106 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.383487 kubelet[2628]: E0527 17:16:47.383382 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.383487 kubelet[2628]: W0527 17:16:47.383395 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.383487 kubelet[2628]: E0527 17:16:47.383411 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.383759 kubelet[2628]: E0527 17:16:47.383713 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.383759 kubelet[2628]: W0527 17:16:47.383731 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.383759 kubelet[2628]: E0527 17:16:47.383750 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.384220 kubelet[2628]: E0527 17:16:47.384085 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.384220 kubelet[2628]: W0527 17:16:47.384097 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.384220 kubelet[2628]: E0527 17:16:47.384112 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.384543 kubelet[2628]: E0527 17:16:47.384375 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.384543 kubelet[2628]: W0527 17:16:47.384392 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.384543 kubelet[2628]: E0527 17:16:47.384423 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.386465 kubelet[2628]: E0527 17:16:47.385844 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.386596 kubelet[2628]: W0527 17:16:47.386575 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.386773 kubelet[2628]: E0527 17:16:47.386643 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:47.386877 kubelet[2628]: E0527 17:16:47.386863 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.387201 kubelet[2628]: W0527 17:16:47.387180 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.387773 kubelet[2628]: E0527 17:16:47.387498 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:47.400002 kubelet[2628]: E0527 17:16:47.399974 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:47.400002 kubelet[2628]: W0527 17:16:47.399995 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:47.400138 kubelet[2628]: E0527 17:16:47.400014 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:48.118142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419794081.mount: Deactivated successfully. May 27 17:16:48.353278 kubelet[2628]: E0527 17:16:48.353231 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v7797" podUID="b3279a25-7c72-48a0-98d2-a10f51772f5c" May 27 17:16:49.530235 containerd[1492]: time="2025-05-27T17:16:49.530181771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:49.531002 containerd[1492]: time="2025-05-27T17:16:49.530973967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 27 17:16:49.532168 containerd[1492]: time="2025-05-27T17:16:49.531881292Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:49.534029 containerd[1492]: time="2025-05-27T17:16:49.533985132Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:49.534628 containerd[1492]: time="2025-05-27T17:16:49.534538184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 2.507131939s" May 27 17:16:49.534628 containerd[1492]: time="2025-05-27T17:16:49.534563227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 27 17:16:49.550944 containerd[1492]: time="2025-05-27T17:16:49.550737799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:16:49.573184 containerd[1492]: time="2025-05-27T17:16:49.573125280Z" level=info msg="CreateContainer within sandbox \"d0dc0758a13ef26cffef33d30f3b6d63980aaa5fac8fd53771aa30d7f4db2890\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:16:49.579292 containerd[1492]: time="2025-05-27T17:16:49.579236419Z" level=info msg="Container d9f4c18f5e2a4cd39ae8cf43b37b0c975ebafa4eec7c5f9bd0e9c7696ed51f0b: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:49.588606 containerd[1492]: time="2025-05-27T17:16:49.587834353Z" level=info msg="CreateContainer within sandbox \"d0dc0758a13ef26cffef33d30f3b6d63980aaa5fac8fd53771aa30d7f4db2890\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d9f4c18f5e2a4cd39ae8cf43b37b0c975ebafa4eec7c5f9bd0e9c7696ed51f0b\"" May 27 17:16:49.589967 containerd[1492]: time="2025-05-27T17:16:49.589882587Z" level=info msg="StartContainer for 
\"d9f4c18f5e2a4cd39ae8cf43b37b0c975ebafa4eec7c5f9bd0e9c7696ed51f0b\"" May 27 17:16:49.591507 containerd[1492]: time="2025-05-27T17:16:49.591462137Z" level=info msg="connecting to shim d9f4c18f5e2a4cd39ae8cf43b37b0c975ebafa4eec7c5f9bd0e9c7696ed51f0b" address="unix:///run/containerd/s/72479dfc5a391e07b4886588e94258dc961692cf5c9b507997ba4c2a71e8f249" protocol=ttrpc version=3 May 27 17:16:49.621660 systemd[1]: Started cri-containerd-d9f4c18f5e2a4cd39ae8cf43b37b0c975ebafa4eec7c5f9bd0e9c7696ed51f0b.scope - libcontainer container d9f4c18f5e2a4cd39ae8cf43b37b0c975ebafa4eec7c5f9bd0e9c7696ed51f0b. May 27 17:16:49.673107 containerd[1492]: time="2025-05-27T17:16:49.670827896Z" level=info msg="StartContainer for \"d9f4c18f5e2a4cd39ae8cf43b37b0c975ebafa4eec7c5f9bd0e9c7696ed51f0b\" returns successfully" May 27 17:16:50.353670 kubelet[2628]: E0527 17:16:50.353575 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v7797" podUID="b3279a25-7c72-48a0-98d2-a10f51772f5c" May 27 17:16:50.434985 kubelet[2628]: I0527 17:16:50.434902 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cf7b6b547-mfsb5" podStartSLOduration=1.9060957539999999 podStartE2EDuration="4.434882157s" podCreationTimestamp="2025-05-27 17:16:46 +0000 UTC" firstStartedPulling="2025-05-27 17:16:47.021708773 +0000 UTC m=+18.757287834" lastFinishedPulling="2025-05-27 17:16:49.550495216 +0000 UTC m=+21.286074237" observedRunningTime="2025-05-27 17:16:50.434295984 +0000 UTC m=+22.169875125" watchObservedRunningTime="2025-05-27 17:16:50.434882157 +0000 UTC m=+22.170461218" May 27 17:16:50.474507 kubelet[2628]: E0527 17:16:50.474414 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 
17:16:50.474507 kubelet[2628]: W0527 17:16:50.474459 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:50.474507 kubelet[2628]: E0527 17:16:50.474481 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:50.474686 kubelet[2628]: E0527 17:16:50.474664 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:50.474726 kubelet[2628]: W0527 17:16:50.474674 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:50.474726 kubelet[2628]: E0527 17:16:50.474722 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:50.474904 kubelet[2628]: E0527 17:16:50.474877 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:50.474904 kubelet[2628]: W0527 17:16:50.474890 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:50.474904 kubelet[2628]: E0527 17:16:50.474899 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:50.475052 kubelet[2628]: E0527 17:16:50.475034 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:50.475052 kubelet[2628]: W0527 17:16:50.475046 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:50.475103 kubelet[2628]: E0527 17:16:50.475054 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:50.475201 kubelet[2628]: E0527 17:16:50.475182 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:50.475201 kubelet[2628]: W0527 17:16:50.475194 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:50.475270 kubelet[2628]: E0527 17:16:50.475201 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:50.475355 kubelet[2628]: E0527 17:16:50.475342 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:50.475386 kubelet[2628]: W0527 17:16:50.475355 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:50.475386 kubelet[2628]: E0527 17:16:50.475365 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:16:50.475666 kubelet[2628]: E0527 17:16:50.475645 2628 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:16:50.475666 kubelet[2628]: W0527 17:16:50.475658 2628 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:16:50.475724 kubelet[2628]: E0527 17:16:50.475667 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [log condensed: kubelet 2628 repeats the same FlexVolume failure triad (driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "executable file not found in $PATH", plugins.go:695 "Error dynamically probing plugins") roughly two dozen more times between May 27 17:16:50.475816 and 17:16:50.509709, differing only in timestamps; the final repetition ends:] May 27 17:16:50.509709 kubelet[2628]: E0527 17:16:50.509690 2628 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:16:50.564208 containerd[1492]: time="2025-05-27T17:16:50.564153588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:50.565369 containerd[1492]: time="2025-05-27T17:16:50.565331254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 27 17:16:50.565920 containerd[1492]: time="2025-05-27T17:16:50.565888705Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:50.568155 containerd[1492]: time="2025-05-27T17:16:50.568105065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:50.569072 containerd[1492]: time="2025-05-27T17:16:50.568654635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.017838389s" May 27 17:16:50.569072 containerd[1492]: time="2025-05-27T17:16:50.568693239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 27 17:16:50.573561 containerd[1492]: time="2025-05-27T17:16:50.573530677Z" level=info msg="CreateContainer within sandbox \"c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:16:50.583165 containerd[1492]: time="2025-05-27T17:16:50.582747752Z" level=info msg="Container 76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:50.591952 containerd[1492]: time="2025-05-27T17:16:50.591900381Z" level=info msg="CreateContainer within sandbox \"c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542\"" May 27 17:16:50.593109 containerd[1492]: time="2025-05-27T17:16:50.592377664Z" level=info msg="StartContainer for \"76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542\"" May 27 17:16:50.594766 containerd[1492]: time="2025-05-27T17:16:50.594738998Z" level=info msg="connecting to shim 76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542" address="unix:///run/containerd/s/eb901725afac4a04705f96ef3afecc6dac63444874d393216a8f6ee16797a90c" protocol=ttrpc version=3 May 27 17:16:50.624675 systemd[1]: Started cri-containerd-76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542.scope - libcontainer container 76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542. May 27 17:16:50.662571 containerd[1492]: time="2025-05-27T17:16:50.662508297Z" level=info msg="StartContainer for \"76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542\" returns successfully" May 27 17:16:50.707365 systemd[1]: cri-containerd-76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542.scope: Deactivated successfully. 
May 27 17:16:50.740684 containerd[1492]: time="2025-05-27T17:16:50.740600571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542\" id:\"76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542\" pid:3323 exited_at:{seconds:1748366210 nanos:731866140}" May 27 17:16:50.741695 containerd[1492]: time="2025-05-27T17:16:50.741561738Z" level=info msg="received exit event container_id:\"76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542\" id:\"76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542\" pid:3323 exited_at:{seconds:1748366210 nanos:731866140}" May 27 17:16:50.775720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76f1c205a6b6ab14eb3f0dfd08b71de7996469c76552eed5e4b5762e47fad542-rootfs.mount: Deactivated successfully. May 27 17:16:51.412270 kubelet[2628]: I0527 17:16:51.412238 2628 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:16:51.416683 containerd[1492]: time="2025-05-27T17:16:51.416649432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:16:52.354184 kubelet[2628]: E0527 17:16:52.353719 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v7797" podUID="b3279a25-7c72-48a0-98d2-a10f51772f5c" May 27 17:16:54.265529 containerd[1492]: time="2025-05-27T17:16:54.265474694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:54.266722 containerd[1492]: time="2025-05-27T17:16:54.266681906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 27 17:16:54.267977 containerd[1492]: 
time="2025-05-27T17:16:54.267946723Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:54.269755 containerd[1492]: time="2025-05-27T17:16:54.269730899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:54.270320 containerd[1492]: time="2025-05-27T17:16:54.270298062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 2.853608667s" May 27 17:16:54.270363 containerd[1492]: time="2025-05-27T17:16:54.270322824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 27 17:16:54.275825 containerd[1492]: time="2025-05-27T17:16:54.275792962Z" level=info msg="CreateContainer within sandbox \"c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:16:54.282996 containerd[1492]: time="2025-05-27T17:16:54.282633405Z" level=info msg="Container e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:54.290514 containerd[1492]: time="2025-05-27T17:16:54.290469844Z" level=info msg="CreateContainer within sandbox \"c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b\"" May 27 
17:16:54.291751 containerd[1492]: time="2025-05-27T17:16:54.291330669Z" level=info msg="StartContainer for \"e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b\"" May 27 17:16:54.293194 containerd[1492]: time="2025-05-27T17:16:54.292908750Z" level=info msg="connecting to shim e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b" address="unix:///run/containerd/s/eb901725afac4a04705f96ef3afecc6dac63444874d393216a8f6ee16797a90c" protocol=ttrpc version=3 May 27 17:16:54.316616 systemd[1]: Started cri-containerd-e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b.scope - libcontainer container e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b. May 27 17:16:54.354321 kubelet[2628]: E0527 17:16:54.354211 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v7797" podUID="b3279a25-7c72-48a0-98d2-a10f51772f5c" May 27 17:16:54.361877 containerd[1492]: time="2025-05-27T17:16:54.361832617Z" level=info msg="StartContainer for \"e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b\" returns successfully" May 27 17:16:55.027255 systemd[1]: cri-containerd-e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b.scope: Deactivated successfully. May 27 17:16:55.027551 systemd[1]: cri-containerd-e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b.scope: Consumed 515ms CPU time, 171.8M memory peak, 1.6M read from disk, 165.5M written to disk. 
May 27 17:16:55.028357 containerd[1492]: time="2025-05-27T17:16:55.028321747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b\" id:\"e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b\" pid:3384 exited_at:{seconds:1748366215 nanos:27966961}" May 27 17:16:55.039791 containerd[1492]: time="2025-05-27T17:16:55.039631897Z" level=info msg="received exit event container_id:\"e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b\" id:\"e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b\" pid:3384 exited_at:{seconds:1748366215 nanos:27966961}" May 27 17:16:55.061799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e86f49e1dfac722afd037e278178bb7c623aa610f8c83cbbc1b85d5718fd895b-rootfs.mount: Deactivated successfully. May 27 17:16:55.098042 kubelet[2628]: I0527 17:16:55.098012 2628 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:16:55.159418 systemd[1]: Created slice kubepods-burstable-pod9298ee2d_a669_4c63_86c0_df6bc71a1662.slice - libcontainer container kubepods-burstable-pod9298ee2d_a669_4c63_86c0_df6bc71a1662.slice. May 27 17:16:55.168159 systemd[1]: Created slice kubepods-burstable-pod667feff9_38eb_4edb_848b_27af34fbc7ec.slice - libcontainer container kubepods-burstable-pod667feff9_38eb_4edb_848b_27af34fbc7ec.slice. May 27 17:16:55.177496 systemd[1]: Created slice kubepods-besteffort-pod8fe5a4c6_ca92_4cfe_a53a_76c1984b9ca3.slice - libcontainer container kubepods-besteffort-pod8fe5a4c6_ca92_4cfe_a53a_76c1984b9ca3.slice. May 27 17:16:55.190478 systemd[1]: Created slice kubepods-besteffort-poddeaf44af_7419_4d5c_a162_bb9c189b0506.slice - libcontainer container kubepods-besteffort-poddeaf44af_7419_4d5c_a162_bb9c189b0506.slice. 
May 27 17:16:55.196434 systemd[1]: Created slice kubepods-besteffort-pod2361ebd6_31bb_45ed_be79_5effc02d6972.slice - libcontainer container kubepods-besteffort-pod2361ebd6_31bb_45ed_be79_5effc02d6972.slice. May 27 17:16:55.203587 systemd[1]: Created slice kubepods-besteffort-pod0d42faa6_418d_403d_9754_a618c97afe0d.slice - libcontainer container kubepods-besteffort-pod0d42faa6_418d_403d_9754_a618c97afe0d.slice. May 27 17:16:55.209101 systemd[1]: Created slice kubepods-besteffort-pod493ad74e_e26f_46b2_b905_4b7831d09a1c.slice - libcontainer container kubepods-besteffort-pod493ad74e_e26f_46b2_b905_4b7831d09a1c.slice. May 27 17:16:55.240881 kubelet[2628]: I0527 17:16:55.240544 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/493ad74e-e26f-46b2-b905-4b7831d09a1c-config\") pod \"goldmane-78d55f7ddc-vm748\" (UID: \"493ad74e-e26f-46b2-b905-4b7831d09a1c\") " pod="calico-system/goldmane-78d55f7ddc-vm748" May 27 17:16:55.240881 kubelet[2628]: I0527 17:16:55.240596 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/493ad74e-e26f-46b2-b905-4b7831d09a1c-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-vm748\" (UID: \"493ad74e-e26f-46b2-b905-4b7831d09a1c\") " pod="calico-system/goldmane-78d55f7ddc-vm748" May 27 17:16:55.240881 kubelet[2628]: I0527 17:16:55.240636 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvzh5\" (UniqueName: \"kubernetes.io/projected/deaf44af-7419-4d5c-a162-bb9c189b0506-kube-api-access-hvzh5\") pod \"calico-apiserver-848495b658-6kgv7\" (UID: \"deaf44af-7419-4d5c-a162-bb9c189b0506\") " pod="calico-apiserver/calico-apiserver-848495b658-6kgv7" May 27 17:16:55.240881 kubelet[2628]: I0527 17:16:55.240667 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-backend-key-pair\") pod \"whisker-6d5bf77ff5-gkjds\" (UID: \"0d42faa6-418d-403d-9754-a618c97afe0d\") " pod="calico-system/whisker-6d5bf77ff5-gkjds" May 27 17:16:55.240881 kubelet[2628]: I0527 17:16:55.240694 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/493ad74e-e26f-46b2-b905-4b7831d09a1c-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-vm748\" (UID: \"493ad74e-e26f-46b2-b905-4b7831d09a1c\") " pod="calico-system/goldmane-78d55f7ddc-vm748" May 27 17:16:55.241157 kubelet[2628]: I0527 17:16:55.240725 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qsf\" (UniqueName: \"kubernetes.io/projected/493ad74e-e26f-46b2-b905-4b7831d09a1c-kube-api-access-f2qsf\") pod \"goldmane-78d55f7ddc-vm748\" (UID: \"493ad74e-e26f-46b2-b905-4b7831d09a1c\") " pod="calico-system/goldmane-78d55f7ddc-vm748" May 27 17:16:55.241157 kubelet[2628]: I0527 17:16:55.240740 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8kf8\" (UniqueName: \"kubernetes.io/projected/667feff9-38eb-4edb-848b-27af34fbc7ec-kube-api-access-r8kf8\") pod \"coredns-668d6bf9bc-d69xj\" (UID: \"667feff9-38eb-4edb-848b-27af34fbc7ec\") " pod="kube-system/coredns-668d6bf9bc-d69xj" May 27 17:16:55.241157 kubelet[2628]: I0527 17:16:55.240758 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/667feff9-38eb-4edb-848b-27af34fbc7ec-config-volume\") pod \"coredns-668d6bf9bc-d69xj\" (UID: \"667feff9-38eb-4edb-848b-27af34fbc7ec\") " pod="kube-system/coredns-668d6bf9bc-d69xj" May 27 17:16:55.241157 kubelet[2628]: I0527 17:16:55.240783 2628 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3-tigera-ca-bundle\") pod \"calico-kube-controllers-5f7778c848-f6n2h\" (UID: \"8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3\") " pod="calico-system/calico-kube-controllers-5f7778c848-f6n2h" May 27 17:16:55.241157 kubelet[2628]: I0527 17:16:55.240802 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc6x5\" (UniqueName: \"kubernetes.io/projected/2361ebd6-31bb-45ed-be79-5effc02d6972-kube-api-access-pc6x5\") pod \"calico-apiserver-848495b658-zwg5f\" (UID: \"2361ebd6-31bb-45ed-be79-5effc02d6972\") " pod="calico-apiserver/calico-apiserver-848495b658-zwg5f" May 27 17:16:55.241294 kubelet[2628]: I0527 17:16:55.240819 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlkd\" (UniqueName: \"kubernetes.io/projected/9298ee2d-a669-4c63-86c0-df6bc71a1662-kube-api-access-qtlkd\") pod \"coredns-668d6bf9bc-59wwh\" (UID: \"9298ee2d-a669-4c63-86c0-df6bc71a1662\") " pod="kube-system/coredns-668d6bf9bc-59wwh" May 27 17:16:55.241294 kubelet[2628]: I0527 17:16:55.240844 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2361ebd6-31bb-45ed-be79-5effc02d6972-calico-apiserver-certs\") pod \"calico-apiserver-848495b658-zwg5f\" (UID: \"2361ebd6-31bb-45ed-be79-5effc02d6972\") " pod="calico-apiserver/calico-apiserver-848495b658-zwg5f" May 27 17:16:55.241294 kubelet[2628]: I0527 17:16:55.240926 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-ca-bundle\") pod \"whisker-6d5bf77ff5-gkjds\" (UID: 
\"0d42faa6-418d-403d-9754-a618c97afe0d\") " pod="calico-system/whisker-6d5bf77ff5-gkjds" May 27 17:16:55.241294 kubelet[2628]: I0527 17:16:55.240968 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4hc\" (UniqueName: \"kubernetes.io/projected/0d42faa6-418d-403d-9754-a618c97afe0d-kube-api-access-sx4hc\") pod \"whisker-6d5bf77ff5-gkjds\" (UID: \"0d42faa6-418d-403d-9754-a618c97afe0d\") " pod="calico-system/whisker-6d5bf77ff5-gkjds" May 27 17:16:55.241294 kubelet[2628]: I0527 17:16:55.240986 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9298ee2d-a669-4c63-86c0-df6bc71a1662-config-volume\") pod \"coredns-668d6bf9bc-59wwh\" (UID: \"9298ee2d-a669-4c63-86c0-df6bc71a1662\") " pod="kube-system/coredns-668d6bf9bc-59wwh" May 27 17:16:55.241499 kubelet[2628]: I0527 17:16:55.241009 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tlq7\" (UniqueName: \"kubernetes.io/projected/8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3-kube-api-access-9tlq7\") pod \"calico-kube-controllers-5f7778c848-f6n2h\" (UID: \"8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3\") " pod="calico-system/calico-kube-controllers-5f7778c848-f6n2h" May 27 17:16:55.241499 kubelet[2628]: I0527 17:16:55.241027 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/deaf44af-7419-4d5c-a162-bb9c189b0506-calico-apiserver-certs\") pod \"calico-apiserver-848495b658-6kgv7\" (UID: \"deaf44af-7419-4d5c-a162-bb9c189b0506\") " pod="calico-apiserver/calico-apiserver-848495b658-6kgv7" May 27 17:16:55.428608 containerd[1492]: time="2025-05-27T17:16:55.428554008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:16:55.465221 containerd[1492]: 
time="2025-05-27T17:16:55.465144654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59wwh,Uid:9298ee2d-a669-4c63-86c0-df6bc71a1662,Namespace:kube-system,Attempt:0,}" May 27 17:16:55.474336 containerd[1492]: time="2025-05-27T17:16:55.474295606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d69xj,Uid:667feff9-38eb-4edb-848b-27af34fbc7ec,Namespace:kube-system,Attempt:0,}" May 27 17:16:55.488802 containerd[1492]: time="2025-05-27T17:16:55.487254997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7778c848-f6n2h,Uid:8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3,Namespace:calico-system,Attempt:0,}" May 27 17:16:55.500919 containerd[1492]: time="2025-05-27T17:16:55.494940281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-6kgv7,Uid:deaf44af-7419-4d5c-a162-bb9c189b0506,Namespace:calico-apiserver,Attempt:0,}" May 27 17:16:55.519740 containerd[1492]: time="2025-05-27T17:16:55.515164966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vm748,Uid:493ad74e-e26f-46b2-b905-4b7831d09a1c,Namespace:calico-system,Attempt:0,}" May 27 17:16:55.519740 containerd[1492]: time="2025-05-27T17:16:55.515324378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5bf77ff5-gkjds,Uid:0d42faa6-418d-403d-9754-a618c97afe0d,Namespace:calico-system,Attempt:0,}" May 27 17:16:55.519740 containerd[1492]: time="2025-05-27T17:16:55.515403544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-zwg5f,Uid:2361ebd6-31bb-45ed-be79-5effc02d6972,Namespace:calico-apiserver,Attempt:0,}" May 27 17:16:55.960029 containerd[1492]: time="2025-05-27T17:16:55.959976980Z" level=error msg="Failed to destroy network for sandbox \"8aef11d02aa074ebe431f0650f5a8f1e6821332220749fd050b2d295a3b9b138\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.966074 containerd[1492]: time="2025-05-27T17:16:55.966002862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59wwh,Uid:9298ee2d-a669-4c63-86c0-df6bc71a1662,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aef11d02aa074ebe431f0650f5a8f1e6821332220749fd050b2d295a3b9b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.971731 kubelet[2628]: E0527 17:16:55.971629 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aef11d02aa074ebe431f0650f5a8f1e6821332220749fd050b2d295a3b9b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.972113 kubelet[2628]: E0527 17:16:55.971760 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aef11d02aa074ebe431f0650f5a8f1e6821332220749fd050b2d295a3b9b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-59wwh" May 27 17:16:55.972113 kubelet[2628]: E0527 17:16:55.971784 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aef11d02aa074ebe431f0650f5a8f1e6821332220749fd050b2d295a3b9b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-59wwh" May 27 17:16:55.972113 kubelet[2628]: E0527 17:16:55.971837 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-59wwh_kube-system(9298ee2d-a669-4c63-86c0-df6bc71a1662)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-59wwh_kube-system(9298ee2d-a669-4c63-86c0-df6bc71a1662)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8aef11d02aa074ebe431f0650f5a8f1e6821332220749fd050b2d295a3b9b138\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-59wwh" podUID="9298ee2d-a669-4c63-86c0-df6bc71a1662" May 27 17:16:55.977032 containerd[1492]: time="2025-05-27T17:16:55.976871060Z" level=error msg="Failed to destroy network for sandbox \"08549847d8f22834e6871e0988cbf0cae0d02278f79d9f3ba91c425a90cfbaf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.978552 containerd[1492]: time="2025-05-27T17:16:55.978491619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7778c848-f6n2h,Uid:8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08549847d8f22834e6871e0988cbf0cae0d02278f79d9f3ba91c425a90cfbaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.978895 kubelet[2628]: E0527 17:16:55.978735 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"08549847d8f22834e6871e0988cbf0cae0d02278f79d9f3ba91c425a90cfbaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.978895 kubelet[2628]: E0527 17:16:55.978808 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08549847d8f22834e6871e0988cbf0cae0d02278f79d9f3ba91c425a90cfbaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7778c848-f6n2h" May 27 17:16:55.978895 kubelet[2628]: E0527 17:16:55.978830 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08549847d8f22834e6871e0988cbf0cae0d02278f79d9f3ba91c425a90cfbaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7778c848-f6n2h" May 27 17:16:55.979026 kubelet[2628]: E0527 17:16:55.978875 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f7778c848-f6n2h_calico-system(8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f7778c848-f6n2h_calico-system(8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08549847d8f22834e6871e0988cbf0cae0d02278f79d9f3ba91c425a90cfbaf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7778c848-f6n2h" podUID="8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3" May 27 17:16:55.980628 containerd[1492]: time="2025-05-27T17:16:55.980577092Z" level=error msg="Failed to destroy network for sandbox \"0089179c846d5509a04b342939864bbb1619092bfd951aeb5ef953ac4fbb964b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.980788 containerd[1492]: time="2025-05-27T17:16:55.980747464Z" level=error msg="Failed to destroy network for sandbox \"53421c60d10c3b9498c245bba9caeb31de1a309d498b5a78c2e98cf001e7fd7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.981623 containerd[1492]: time="2025-05-27T17:16:55.981562244Z" level=error msg="Failed to destroy network for sandbox \"41aaf264d2b8581bf5616178e39de17e0dbf716f04d02d3146b87215b386a6a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.981896 containerd[1492]: time="2025-05-27T17:16:55.981775980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-6kgv7,Uid:deaf44af-7419-4d5c-a162-bb9c189b0506,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0089179c846d5509a04b342939864bbb1619092bfd951aeb5ef953ac4fbb964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.982399 containerd[1492]: 
time="2025-05-27T17:16:55.982273577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5bf77ff5-gkjds,Uid:0d42faa6-418d-403d-9754-a618c97afe0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53421c60d10c3b9498c245bba9caeb31de1a309d498b5a78c2e98cf001e7fd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.982701 kubelet[2628]: E0527 17:16:55.982611 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53421c60d10c3b9498c245bba9caeb31de1a309d498b5a78c2e98cf001e7fd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.982774 kubelet[2628]: E0527 17:16:55.982718 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0089179c846d5509a04b342939864bbb1619092bfd951aeb5ef953ac4fbb964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.982774 kubelet[2628]: E0527 17:16:55.982753 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53421c60d10c3b9498c245bba9caeb31de1a309d498b5a78c2e98cf001e7fd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5bf77ff5-gkjds" May 27 17:16:55.982821 kubelet[2628]: E0527 17:16:55.982777 2628 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0089179c846d5509a04b342939864bbb1619092bfd951aeb5ef953ac4fbb964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848495b658-6kgv7" May 27 17:16:55.982821 kubelet[2628]: E0527 17:16:55.982797 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0089179c846d5509a04b342939864bbb1619092bfd951aeb5ef953ac4fbb964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848495b658-6kgv7" May 27 17:16:55.983197 kubelet[2628]: E0527 17:16:55.983126 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-848495b658-6kgv7_calico-apiserver(deaf44af-7419-4d5c-a162-bb9c189b0506)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-848495b658-6kgv7_calico-apiserver(deaf44af-7419-4d5c-a162-bb9c189b0506)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0089179c846d5509a04b342939864bbb1619092bfd951aeb5ef953ac4fbb964b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-848495b658-6kgv7" podUID="deaf44af-7419-4d5c-a162-bb9c189b0506" May 27 17:16:55.983255 kubelet[2628]: E0527 17:16:55.982775 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"53421c60d10c3b9498c245bba9caeb31de1a309d498b5a78c2e98cf001e7fd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5bf77ff5-gkjds" May 27 17:16:55.983288 kubelet[2628]: E0527 17:16:55.983264 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d5bf77ff5-gkjds_calico-system(0d42faa6-418d-403d-9754-a618c97afe0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d5bf77ff5-gkjds_calico-system(0d42faa6-418d-403d-9754-a618c97afe0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53421c60d10c3b9498c245bba9caeb31de1a309d498b5a78c2e98cf001e7fd7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d5bf77ff5-gkjds" podUID="0d42faa6-418d-403d-9754-a618c97afe0d" May 27 17:16:55.983551 containerd[1492]: time="2025-05-27T17:16:55.983492386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d69xj,Uid:667feff9-38eb-4edb-848b-27af34fbc7ec,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41aaf264d2b8581bf5616178e39de17e0dbf716f04d02d3146b87215b386a6a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.983869 kubelet[2628]: E0527 17:16:55.983703 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41aaf264d2b8581bf5616178e39de17e0dbf716f04d02d3146b87215b386a6a3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.983920 kubelet[2628]: E0527 17:16:55.983893 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41aaf264d2b8581bf5616178e39de17e0dbf716f04d02d3146b87215b386a6a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d69xj" May 27 17:16:55.983956 kubelet[2628]: E0527 17:16:55.983922 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41aaf264d2b8581bf5616178e39de17e0dbf716f04d02d3146b87215b386a6a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d69xj" May 27 17:16:55.983980 kubelet[2628]: E0527 17:16:55.983963 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d69xj_kube-system(667feff9-38eb-4edb-848b-27af34fbc7ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d69xj_kube-system(667feff9-38eb-4edb-848b-27af34fbc7ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41aaf264d2b8581bf5616178e39de17e0dbf716f04d02d3146b87215b386a6a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d69xj" podUID="667feff9-38eb-4edb-848b-27af34fbc7ec" May 27 17:16:55.985085 containerd[1492]: time="2025-05-27T17:16:55.985044420Z" 
level=error msg="Failed to destroy network for sandbox \"fecab17e0c79edd21fba924958e32f1d9f6c4ea31570e8f82582d858de403a37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.986709 containerd[1492]: time="2025-05-27T17:16:55.986647858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-zwg5f,Uid:2361ebd6-31bb-45ed-be79-5effc02d6972,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fecab17e0c79edd21fba924958e32f1d9f6c4ea31570e8f82582d858de403a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.986910 kubelet[2628]: E0527 17:16:55.986867 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fecab17e0c79edd21fba924958e32f1d9f6c4ea31570e8f82582d858de403a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.986953 kubelet[2628]: E0527 17:16:55.986929 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fecab17e0c79edd21fba924958e32f1d9f6c4ea31570e8f82582d858de403a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848495b658-zwg5f" May 27 17:16:55.986981 kubelet[2628]: E0527 17:16:55.986952 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"fecab17e0c79edd21fba924958e32f1d9f6c4ea31570e8f82582d858de403a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848495b658-zwg5f" May 27 17:16:55.987037 kubelet[2628]: E0527 17:16:55.987002 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-848495b658-zwg5f_calico-apiserver(2361ebd6-31bb-45ed-be79-5effc02d6972)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-848495b658-zwg5f_calico-apiserver(2361ebd6-31bb-45ed-be79-5effc02d6972)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fecab17e0c79edd21fba924958e32f1d9f6c4ea31570e8f82582d858de403a37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-848495b658-zwg5f" podUID="2361ebd6-31bb-45ed-be79-5effc02d6972" May 27 17:16:55.989140 containerd[1492]: time="2025-05-27T17:16:55.989071876Z" level=error msg="Failed to destroy network for sandbox \"d5305706fe3cdd25486bab4687d0b9c9154d88668e04ac04910038e935f09d57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.993226 containerd[1492]: time="2025-05-27T17:16:55.992433922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vm748,Uid:493ad74e-e26f-46b2-b905-4b7831d09a1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5305706fe3cdd25486bab4687d0b9c9154d88668e04ac04910038e935f09d57\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.993617 kubelet[2628]: E0527 17:16:55.993549 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5305706fe3cdd25486bab4687d0b9c9154d88668e04ac04910038e935f09d57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:55.993896 kubelet[2628]: E0527 17:16:55.993753 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5305706fe3cdd25486bab4687d0b9c9154d88668e04ac04910038e935f09d57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-vm748" May 27 17:16:55.993896 kubelet[2628]: E0527 17:16:55.993783 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5305706fe3cdd25486bab4687d0b9c9154d88668e04ac04910038e935f09d57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-vm748" May 27 17:16:55.993896 kubelet[2628]: E0527 17:16:55.993839 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-vm748_calico-system(493ad74e-e26f-46b2-b905-4b7831d09a1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-vm748_calico-system(493ad74e-e26f-46b2-b905-4b7831d09a1c)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"d5305706fe3cdd25486bab4687d0b9c9154d88668e04ac04910038e935f09d57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-vm748" podUID="493ad74e-e26f-46b2-b905-4b7831d09a1c" May 27 17:16:56.360703 systemd[1]: Created slice kubepods-besteffort-podb3279a25_7c72_48a0_98d2_a10f51772f5c.slice - libcontainer container kubepods-besteffort-podb3279a25_7c72_48a0_98d2_a10f51772f5c.slice. May 27 17:16:56.363313 containerd[1492]: time="2025-05-27T17:16:56.363251244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v7797,Uid:b3279a25-7c72-48a0-98d2-a10f51772f5c,Namespace:calico-system,Attempt:0,}" May 27 17:16:56.436626 containerd[1492]: time="2025-05-27T17:16:56.436563979Z" level=error msg="Failed to destroy network for sandbox \"0a870f085021445a559c8d9d97bc3199720273132ef17912d9c7ec164e043662\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:56.438734 systemd[1]: run-netns-cni\x2dac03d04f\x2d3114\x2d9ce9\x2dd217\x2d71f917430dc2.mount: Deactivated successfully. 
May 27 17:16:56.447205 containerd[1492]: time="2025-05-27T17:16:56.447144486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v7797,Uid:b3279a25-7c72-48a0-98d2-a10f51772f5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a870f085021445a559c8d9d97bc3199720273132ef17912d9c7ec164e043662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:56.447451 kubelet[2628]: E0527 17:16:56.447405 2628 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a870f085021445a559c8d9d97bc3199720273132ef17912d9c7ec164e043662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:16:56.447567 kubelet[2628]: E0527 17:16:56.447528 2628 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a870f085021445a559c8d9d97bc3199720273132ef17912d9c7ec164e043662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v7797" May 27 17:16:56.447567 kubelet[2628]: E0527 17:16:56.447553 2628 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a870f085021445a559c8d9d97bc3199720273132ef17912d9c7ec164e043662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v7797" 
May 27 17:16:56.447643 kubelet[2628]: E0527 17:16:56.447616 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v7797_calico-system(b3279a25-7c72-48a0-98d2-a10f51772f5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v7797_calico-system(b3279a25-7c72-48a0-98d2-a10f51772f5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a870f085021445a559c8d9d97bc3199720273132ef17912d9c7ec164e043662\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v7797" podUID="b3279a25-7c72-48a0-98d2-a10f51772f5c" May 27 17:16:59.322817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731973595.mount: Deactivated successfully. May 27 17:16:59.562676 containerd[1492]: time="2025-05-27T17:16:59.543169003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 27 17:16:59.563141 containerd[1492]: time="2025-05-27T17:16:59.545749686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:59.563141 containerd[1492]: time="2025-05-27T17:16:59.546039664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 4.117428171s" May 27 17:16:59.563141 containerd[1492]: time="2025-05-27T17:16:59.562792922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference 
\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 27 17:16:59.563353 containerd[1492]: time="2025-05-27T17:16:59.563330476Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:59.563907 containerd[1492]: time="2025-05-27T17:16:59.563875030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:16:59.575260 containerd[1492]: time="2025-05-27T17:16:59.575142182Z" level=info msg="CreateContainer within sandbox \"c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 17:16:59.594477 containerd[1492]: time="2025-05-27T17:16:59.593775758Z" level=info msg="Container a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:59.595613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140326191.mount: Deactivated successfully. 
May 27 17:16:59.609019 containerd[1492]: time="2025-05-27T17:16:59.608967758Z" level=info msg="CreateContainer within sandbox \"c3333df1a74be71ca2ff72e6e6e0aaf8b04bf991de74fc72e47618d5b039021b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418\"" May 27 17:16:59.609952 containerd[1492]: time="2025-05-27T17:16:59.609921898Z" level=info msg="StartContainer for \"a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418\"" May 27 17:16:59.611557 containerd[1492]: time="2025-05-27T17:16:59.611464915Z" level=info msg="connecting to shim a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418" address="unix:///run/containerd/s/eb901725afac4a04705f96ef3afecc6dac63444874d393216a8f6ee16797a90c" protocol=ttrpc version=3 May 27 17:16:59.636035 systemd[1]: Started cri-containerd-a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418.scope - libcontainer container a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418. May 27 17:16:59.695411 containerd[1492]: time="2025-05-27T17:16:59.695278808Z" level=info msg="StartContainer for \"a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418\" returns successfully" May 27 17:16:59.886564 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 17:16:59.886672 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 17:17:00.089133 kubelet[2628]: I0527 17:17:00.089093 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-backend-key-pair\") pod \"0d42faa6-418d-403d-9754-a618c97afe0d\" (UID: \"0d42faa6-418d-403d-9754-a618c97afe0d\") " May 27 17:17:00.089133 kubelet[2628]: I0527 17:17:00.089140 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-ca-bundle\") pod \"0d42faa6-418d-403d-9754-a618c97afe0d\" (UID: \"0d42faa6-418d-403d-9754-a618c97afe0d\") " May 27 17:17:00.090150 kubelet[2628]: I0527 17:17:00.089167 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx4hc\" (UniqueName: \"kubernetes.io/projected/0d42faa6-418d-403d-9754-a618c97afe0d-kube-api-access-sx4hc\") pod \"0d42faa6-418d-403d-9754-a618c97afe0d\" (UID: \"0d42faa6-418d-403d-9754-a618c97afe0d\") " May 27 17:17:00.101552 kubelet[2628]: I0527 17:17:00.101499 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0d42faa6-418d-403d-9754-a618c97afe0d" (UID: "0d42faa6-418d-403d-9754-a618c97afe0d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:17:00.103469 kubelet[2628]: I0527 17:17:00.102807 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d42faa6-418d-403d-9754-a618c97afe0d-kube-api-access-sx4hc" (OuterVolumeSpecName: "kube-api-access-sx4hc") pod "0d42faa6-418d-403d-9754-a618c97afe0d" (UID: "0d42faa6-418d-403d-9754-a618c97afe0d"). InnerVolumeSpecName "kube-api-access-sx4hc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:17:00.113131 kubelet[2628]: I0527 17:17:00.113075 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0d42faa6-418d-403d-9754-a618c97afe0d" (UID: "0d42faa6-418d-403d-9754-a618c97afe0d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:17:00.189647 kubelet[2628]: I0527 17:17:00.189611 2628 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 17:17:00.189647 kubelet[2628]: I0527 17:17:00.189643 2628 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sx4hc\" (UniqueName: \"kubernetes.io/projected/0d42faa6-418d-403d-9754-a618c97afe0d-kube-api-access-sx4hc\") on node \"localhost\" DevicePath \"\"" May 27 17:17:00.189647 kubelet[2628]: I0527 17:17:00.189654 2628 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d42faa6-418d-403d-9754-a618c97afe0d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 17:17:00.323609 systemd[1]: var-lib-kubelet-pods-0d42faa6\x2d418d\x2d403d\x2d9754\x2da618c97afe0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsx4hc.mount: Deactivated successfully. May 27 17:17:00.323702 systemd[1]: var-lib-kubelet-pods-0d42faa6\x2d418d\x2d403d\x2d9754\x2da618c97afe0d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 17:17:00.359844 systemd[1]: Removed slice kubepods-besteffort-pod0d42faa6_418d_403d_9754_a618c97afe0d.slice - libcontainer container kubepods-besteffort-pod0d42faa6_418d_403d_9754_a618c97afe0d.slice. 
May 27 17:17:00.522559 kubelet[2628]: I0527 17:17:00.522487 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-78x4s" podStartSLOduration=2.337822832 podStartE2EDuration="14.52232498s" podCreationTimestamp="2025-05-27 17:16:46 +0000 UTC" firstStartedPulling="2025-05-27 17:16:47.379152628 +0000 UTC m=+19.114731689" lastFinishedPulling="2025-05-27 17:16:59.563654776 +0000 UTC m=+31.299233837" observedRunningTime="2025-05-27 17:17:00.502051104 +0000 UTC m=+32.237630165" watchObservedRunningTime="2025-05-27 17:17:00.52232498 +0000 UTC m=+32.257904041" May 27 17:17:00.540220 systemd[1]: Created slice kubepods-besteffort-pod2fb07cfb_44c7_4704_bd84_8ea56f199724.slice - libcontainer container kubepods-besteffort-pod2fb07cfb_44c7_4704_bd84_8ea56f199724.slice. May 27 17:17:00.591990 kubelet[2628]: I0527 17:17:00.591927 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgjcl\" (UniqueName: \"kubernetes.io/projected/2fb07cfb-44c7-4704-bd84-8ea56f199724-kube-api-access-kgjcl\") pod \"whisker-8687f89567-wlhb2\" (UID: \"2fb07cfb-44c7-4704-bd84-8ea56f199724\") " pod="calico-system/whisker-8687f89567-wlhb2" May 27 17:17:00.591990 kubelet[2628]: I0527 17:17:00.592036 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb07cfb-44c7-4704-bd84-8ea56f199724-whisker-ca-bundle\") pod \"whisker-8687f89567-wlhb2\" (UID: \"2fb07cfb-44c7-4704-bd84-8ea56f199724\") " pod="calico-system/whisker-8687f89567-wlhb2" May 27 17:17:00.591990 kubelet[2628]: I0527 17:17:00.592056 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2fb07cfb-44c7-4704-bd84-8ea56f199724-whisker-backend-key-pair\") pod \"whisker-8687f89567-wlhb2\" (UID: 
\"2fb07cfb-44c7-4704-bd84-8ea56f199724\") " pod="calico-system/whisker-8687f89567-wlhb2" May 27 17:17:00.849081 containerd[1492]: time="2025-05-27T17:17:00.849033220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8687f89567-wlhb2,Uid:2fb07cfb-44c7-4704-bd84-8ea56f199724,Namespace:calico-system,Attempt:0,}" May 27 17:17:01.088133 systemd-networkd[1420]: cali826bcc44ecf: Link UP May 27 17:17:01.088910 systemd-networkd[1420]: cali826bcc44ecf: Gained carrier May 27 17:17:01.106792 containerd[1492]: 2025-05-27 17:17:00.875 [INFO][3762] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:01.106792 containerd[1492]: 2025-05-27 17:17:00.909 [INFO][3762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8687f89567--wlhb2-eth0 whisker-8687f89567- calico-system 2fb07cfb-44c7-4704-bd84-8ea56f199724 850 0 2025-05-27 17:17:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8687f89567 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8687f89567-wlhb2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali826bcc44ecf [] [] }} ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-" May 27 17:17:01.106792 containerd[1492]: 2025-05-27 17:17:00.910 [INFO][3762] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-eth0" May 27 17:17:01.106792 containerd[1492]: 2025-05-27 17:17:01.034 [INFO][3776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" HandleID="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Workload="localhost-k8s-whisker--8687f89567--wlhb2-eth0" May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.034 [INFO][3776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" HandleID="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Workload="localhost-k8s-whisker--8687f89567--wlhb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345e60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8687f89567-wlhb2", "timestamp":"2025-05-27 17:17:01.034589385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.034 [INFO][3776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.034 [INFO][3776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.035 [INFO][3776] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.050 [INFO][3776] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" host="localhost" May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.055 [INFO][3776] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.059 [INFO][3776] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.061 [INFO][3776] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.063 [INFO][3776] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:01.107028 containerd[1492]: 2025-05-27 17:17:01.063 [INFO][3776] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" host="localhost" May 27 17:17:01.107229 containerd[1492]: 2025-05-27 17:17:01.065 [INFO][3776] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837 May 27 17:17:01.107229 containerd[1492]: 2025-05-27 17:17:01.068 [INFO][3776] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" host="localhost" May 27 17:17:01.107229 containerd[1492]: 2025-05-27 17:17:01.074 [INFO][3776] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" host="localhost" May 27 17:17:01.107229 containerd[1492]: 2025-05-27 17:17:01.075 [INFO][3776] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" host="localhost" May 27 17:17:01.107229 containerd[1492]: 2025-05-27 17:17:01.075 [INFO][3776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:17:01.107229 containerd[1492]: 2025-05-27 17:17:01.075 [INFO][3776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" HandleID="k8s-pod-network.98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Workload="localhost-k8s-whisker--8687f89567--wlhb2-eth0" May 27 17:17:01.107340 containerd[1492]: 2025-05-27 17:17:01.078 [INFO][3762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8687f89567--wlhb2-eth0", GenerateName:"whisker-8687f89567-", Namespace:"calico-system", SelfLink:"", UID:"2fb07cfb-44c7-4704-bd84-8ea56f199724", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8687f89567", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8687f89567-wlhb2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali826bcc44ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:01.107340 containerd[1492]: 2025-05-27 17:17:01.078 [INFO][3762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-eth0" May 27 17:17:01.107407 containerd[1492]: 2025-05-27 17:17:01.078 [INFO][3762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali826bcc44ecf ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-eth0" May 27 17:17:01.107407 containerd[1492]: 2025-05-27 17:17:01.089 [INFO][3762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-eth0" May 27 17:17:01.107467 containerd[1492]: 2025-05-27 17:17:01.089 [INFO][3762] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" 
WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8687f89567--wlhb2-eth0", GenerateName:"whisker-8687f89567-", Namespace:"calico-system", SelfLink:"", UID:"2fb07cfb-44c7-4704-bd84-8ea56f199724", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8687f89567", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837", Pod:"whisker-8687f89567-wlhb2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali826bcc44ecf", MAC:"ba:02:cc:a9:d9:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:01.107993 containerd[1492]: 2025-05-27 17:17:01.103 [INFO][3762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" Namespace="calico-system" Pod="whisker-8687f89567-wlhb2" WorkloadEndpoint="localhost-k8s-whisker--8687f89567--wlhb2-eth0" May 27 17:17:01.394756 containerd[1492]: time="2025-05-27T17:17:01.393409010Z" level=info msg="connecting to shim 
98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837" address="unix:///run/containerd/s/88f64f171bf07681eefeab3a748d03fdcea86b73c2bb5ccaac5bedededbb3c09" namespace=k8s.io protocol=ttrpc version=3 May 27 17:17:01.431625 systemd[1]: Started cri-containerd-98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837.scope - libcontainer container 98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837. May 27 17:17:01.447535 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:01.489677 containerd[1492]: time="2025-05-27T17:17:01.489634921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8687f89567-wlhb2,Uid:2fb07cfb-44c7-4704-bd84-8ea56f199724,Namespace:calico-system,Attempt:0,} returns sandbox id \"98aeb755ce2ca0e330226eadd6273a9074a5df673ddee0da6f3c0def8271a837\"" May 27 17:17:01.491339 containerd[1492]: time="2025-05-27T17:17:01.491279698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:17:01.607966 containerd[1492]: time="2025-05-27T17:17:01.607886209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418\" id:\"0456ab5b95e4bb6a26fc735ecb6b9f03b391c4dd73d81faa05da97c34cf5ee05\" pid:3950 exit_status:1 exited_at:{seconds:1748366221 nanos:607501867}" May 27 17:17:01.671127 containerd[1492]: time="2025-05-27T17:17:01.670745753Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:17:01.672044 containerd[1492]: time="2025-05-27T17:17:01.671932423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:17:01.672044 containerd[1492]: time="2025-05-27T17:17:01.671988747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:17:01.672208 kubelet[2628]: E0527 17:17:01.672163 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:17:01.672588 kubelet[2628]: E0527 17:17:01.672221 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:17:01.679568 kubelet[2628]: E0527 17:17:01.679516 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:85fac0f966784eb3ae65fa9fa496f2d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8687f89567-wlhb2_calico-system(2fb07cfb-44c7-4704-bd84-8ea56f199724): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:17:01.681726 containerd[1492]: 
time="2025-05-27T17:17:01.681694839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:17:01.853561 containerd[1492]: time="2025-05-27T17:17:01.853515804Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:17:01.854434 containerd[1492]: time="2025-05-27T17:17:01.854393056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:17:01.854582 containerd[1492]: time="2025-05-27T17:17:01.854461540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:17:01.854678 kubelet[2628]: E0527 17:17:01.854640 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:17:01.854761 kubelet[2628]: E0527 17:17:01.854693 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:17:01.854849 kubelet[2628]: E0527 17:17:01.854806 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8687f89567-wlhb2_calico-system(2fb07cfb-44c7-4704-bd84-8ea56f199724): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:17:01.856246 kubelet[2628]: E0527 17:17:01.856180 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-8687f89567-wlhb2" podUID="2fb07cfb-44c7-4704-bd84-8ea56f199724" May 27 17:17:02.154600 systemd-networkd[1420]: cali826bcc44ecf: Gained IPv6LL 
May 27 17:17:02.356047 kubelet[2628]: I0527 17:17:02.355997 2628 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d42faa6-418d-403d-9754-a618c97afe0d" path="/var/lib/kubelet/pods/0d42faa6-418d-403d-9754-a618c97afe0d/volumes" May 27 17:17:02.469799 kubelet[2628]: E0527 17:17:02.469531 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-8687f89567-wlhb2" podUID="2fb07cfb-44c7-4704-bd84-8ea56f199724" May 27 17:17:02.559709 containerd[1492]: time="2025-05-27T17:17:02.559666270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418\" id:\"ba7c1ace6297c5581b268127c73c5ca6b75f6da7bd3378ba52652d8a38ebba40\" pid:3998 exit_status:1 exited_at:{seconds:1748366222 nanos:558844983}" May 27 17:17:03.469794 kubelet[2628]: E0527 17:17:03.469736 2628 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-8687f89567-wlhb2" podUID="2fb07cfb-44c7-4704-bd84-8ea56f199724" May 27 17:17:07.354267 containerd[1492]: time="2025-05-27T17:17:07.354163913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d69xj,Uid:667feff9-38eb-4edb-848b-27af34fbc7ec,Namespace:kube-system,Attempt:0,}" May 27 17:17:07.354679 containerd[1492]: time="2025-05-27T17:17:07.354212475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-zwg5f,Uid:2361ebd6-31bb-45ed-be79-5effc02d6972,Namespace:calico-apiserver,Attempt:0,}" May 27 17:17:07.468126 systemd-networkd[1420]: cali9e129371e88: Link UP May 27 17:17:07.468263 systemd-networkd[1420]: cali9e129371e88: Gained carrier May 27 17:17:07.483004 containerd[1492]: 2025-05-27 17:17:07.380 [INFO][4128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:07.483004 containerd[1492]: 2025-05-27 
17:17:07.400 [INFO][4128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d69xj-eth0 coredns-668d6bf9bc- kube-system 667feff9-38eb-4edb-848b-27af34fbc7ec 789 0 2025-05-27 17:16:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d69xj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e129371e88 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-" May 27 17:17:07.483004 containerd[1492]: 2025-05-27 17:17:07.400 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" May 27 17:17:07.483004 containerd[1492]: 2025-05-27 17:17:07.425 [INFO][4150] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" HandleID="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Workload="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.425 [INFO][4150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" HandleID="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Workload="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e0e80), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d69xj", "timestamp":"2025-05-27 17:17:07.425427532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.425 [INFO][4150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.425 [INFO][4150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.425 [INFO][4150] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.435 [INFO][4150] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" host="localhost" May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.439 [INFO][4150] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.444 [INFO][4150] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.446 [INFO][4150] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.448 [INFO][4150] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:07.483211 containerd[1492]: 2025-05-27 17:17:07.448 [INFO][4150] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" host="localhost" May 27 17:17:07.483454 
containerd[1492]: 2025-05-27 17:17:07.449 [INFO][4150] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f May 27 17:17:07.483454 containerd[1492]: 2025-05-27 17:17:07.453 [INFO][4150] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" host="localhost" May 27 17:17:07.483454 containerd[1492]: 2025-05-27 17:17:07.462 [INFO][4150] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" host="localhost" May 27 17:17:07.483454 containerd[1492]: 2025-05-27 17:17:07.462 [INFO][4150] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" host="localhost" May 27 17:17:07.483454 containerd[1492]: 2025-05-27 17:17:07.462 [INFO][4150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:17:07.483454 containerd[1492]: 2025-05-27 17:17:07.462 [INFO][4150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" HandleID="k8s-pod-network.15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Workload="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" May 27 17:17:07.483571 containerd[1492]: 2025-05-27 17:17:07.465 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d69xj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"667feff9-38eb-4edb-848b-27af34fbc7ec", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d69xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e129371e88", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:07.483627 containerd[1492]: 2025-05-27 17:17:07.465 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" May 27 17:17:07.483627 containerd[1492]: 2025-05-27 17:17:07.465 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e129371e88 ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" May 27 17:17:07.483627 containerd[1492]: 2025-05-27 17:17:07.467 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" May 27 17:17:07.483693 containerd[1492]: 2025-05-27 17:17:07.467 [INFO][4128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d69xj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"667feff9-38eb-4edb-848b-27af34fbc7ec", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f", Pod:"coredns-668d6bf9bc-d69xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e129371e88", MAC:"ca:9e:7a:3e:12:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:07.483693 containerd[1492]: 2025-05-27 17:17:07.479 [INFO][4128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" Namespace="kube-system" Pod="coredns-668d6bf9bc-d69xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d69xj-eth0" May 27 17:17:07.506645 containerd[1492]: time="2025-05-27T17:17:07.506601317Z" level=info msg="connecting to shim 15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f" address="unix:///run/containerd/s/3e344b5286981b540bfeee998ed12906c9480c2588dd767f69071241008777ab" namespace=k8s.io protocol=ttrpc version=3 May 27 17:17:07.533605 systemd[1]: Started cri-containerd-15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f.scope - libcontainer container 15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f. May 27 17:17:07.547511 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:07.569841 systemd-networkd[1420]: calife5480e45b6: Link UP May 27 17:17:07.570744 systemd-networkd[1420]: calife5480e45b6: Gained carrier May 27 17:17:07.576950 containerd[1492]: time="2025-05-27T17:17:07.576921210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d69xj,Uid:667feff9-38eb-4edb-848b-27af34fbc7ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f\"" May 27 17:17:07.583328 containerd[1492]: time="2025-05-27T17:17:07.583261801Z" level=info msg="CreateContainer within sandbox \"15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.380 [INFO][4116] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.399 [INFO][4116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0 
calico-apiserver-848495b658- calico-apiserver 2361ebd6-31bb-45ed-be79-5effc02d6972 790 0 2025-05-27 17:16:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:848495b658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-848495b658-zwg5f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calife5480e45b6 [] [] }} ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.399 [INFO][4116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.426 [INFO][4144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" HandleID="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Workload="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.426 [INFO][4144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" HandleID="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Workload="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c960), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-848495b658-zwg5f", "timestamp":"2025-05-27 17:17:07.426674033 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.427 [INFO][4144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.462 [INFO][4144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.462 [INFO][4144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.535 [INFO][4144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.541 [INFO][4144] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.545 [INFO][4144] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.547 [INFO][4144] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.550 [INFO][4144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.550 [INFO][4144] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" host="localhost" May 27 
17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.552 [INFO][4144] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.555 [INFO][4144] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.562 [INFO][4144] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.563 [INFO][4144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" host="localhost" May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.563 [INFO][4144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:17:07.583768 containerd[1492]: 2025-05-27 17:17:07.563 [INFO][4144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" HandleID="k8s-pod-network.8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Workload="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" May 27 17:17:07.584238 containerd[1492]: 2025-05-27 17:17:07.566 [INFO][4116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0", GenerateName:"calico-apiserver-848495b658-", Namespace:"calico-apiserver", SelfLink:"", UID:"2361ebd6-31bb-45ed-be79-5effc02d6972", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848495b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-848495b658-zwg5f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife5480e45b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:07.584238 containerd[1492]: 2025-05-27 17:17:07.566 [INFO][4116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" May 27 17:17:07.584238 containerd[1492]: 2025-05-27 17:17:07.566 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife5480e45b6 ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" May 27 17:17:07.584238 containerd[1492]: 2025-05-27 17:17:07.569 [INFO][4116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" May 27 17:17:07.584238 containerd[1492]: 2025-05-27 17:17:07.569 [INFO][4116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0", GenerateName:"calico-apiserver-848495b658-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"2361ebd6-31bb-45ed-be79-5effc02d6972", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848495b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c", Pod:"calico-apiserver-848495b658-zwg5f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife5480e45b6", MAC:"7a:51:e0:7c:ae:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:07.584238 containerd[1492]: 2025-05-27 17:17:07.580 [INFO][4116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-zwg5f" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--zwg5f-eth0" May 27 17:17:07.605593 containerd[1492]: time="2025-05-27T17:17:07.604476843Z" level=info msg="connecting to shim 8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c" address="unix:///run/containerd/s/bdd3b8140355e0a4ee817c23fa4d8cfe556238d70ffca92d44ee340759bc1d5c" namespace=k8s.io protocol=ttrpc 
version=3 May 27 17:17:07.614594 containerd[1492]: time="2025-05-27T17:17:07.614512855Z" level=info msg="Container 1b59dfb18b58bc4951a7d4d7f8e94de8fda13178b4527fc5792421e025bb4320: CDI devices from CRI Config.CDIDevices: []" May 27 17:17:07.619475 containerd[1492]: time="2025-05-27T17:17:07.619428057Z" level=info msg="CreateContainer within sandbox \"15d5c30b852364d0538d29514ad8601224eb7911400810367c9e3c63d955d91f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b59dfb18b58bc4951a7d4d7f8e94de8fda13178b4527fc5792421e025bb4320\"" May 27 17:17:07.620687 containerd[1492]: time="2025-05-27T17:17:07.620661717Z" level=info msg="StartContainer for \"1b59dfb18b58bc4951a7d4d7f8e94de8fda13178b4527fc5792421e025bb4320\"" May 27 17:17:07.628453 containerd[1492]: time="2025-05-27T17:17:07.628372536Z" level=info msg="connecting to shim 1b59dfb18b58bc4951a7d4d7f8e94de8fda13178b4527fc5792421e025bb4320" address="unix:///run/containerd/s/3e344b5286981b540bfeee998ed12906c9480c2588dd767f69071241008777ab" protocol=ttrpc version=3 May 27 17:17:07.631573 systemd[1]: Started cri-containerd-8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c.scope - libcontainer container 8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c. May 27 17:17:07.653609 systemd[1]: Started cri-containerd-1b59dfb18b58bc4951a7d4d7f8e94de8fda13178b4527fc5792421e025bb4320.scope - libcontainer container 1b59dfb18b58bc4951a7d4d7f8e94de8fda13178b4527fc5792421e025bb4320. 
May 27 17:17:07.658463 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:07.682225 containerd[1492]: time="2025-05-27T17:17:07.682174937Z" level=info msg="StartContainer for \"1b59dfb18b58bc4951a7d4d7f8e94de8fda13178b4527fc5792421e025bb4320\" returns successfully" May 27 17:17:07.688089 containerd[1492]: time="2025-05-27T17:17:07.688057186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-zwg5f,Uid:2361ebd6-31bb-45ed-be79-5effc02d6972,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c\"" May 27 17:17:07.690054 containerd[1492]: time="2025-05-27T17:17:07.690021363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:17:08.356162 containerd[1492]: time="2025-05-27T17:17:08.355867272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v7797,Uid:b3279a25-7c72-48a0-98d2-a10f51772f5c,Namespace:calico-system,Attempt:0,}" May 27 17:17:08.467792 systemd-networkd[1420]: cali17aed5a206f: Link UP May 27 17:17:08.467928 systemd-networkd[1420]: cali17aed5a206f: Gained carrier May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.380 [INFO][4326] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.398 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v7797-eth0 csi-node-driver- calico-system b3279a25-7c72-48a0-98d2-a10f51772f5c 683 0 2025-05-27 17:16:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} 
{k8s localhost csi-node-driver-v7797 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali17aed5a206f [] [] }} ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.398 [INFO][4326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-eth0" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.425 [INFO][4340] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" HandleID="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Workload="localhost-k8s-csi--node--driver--v7797-eth0" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.425 [INFO][4340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" HandleID="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Workload="localhost-k8s-csi--node--driver--v7797-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a9110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v7797", "timestamp":"2025-05-27 17:17:08.425024538 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.425 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.425 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.425 [INFO][4340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.436 [INFO][4340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.440 [INFO][4340] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.444 [INFO][4340] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.446 [INFO][4340] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.448 [INFO][4340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.448 [INFO][4340] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.450 [INFO][4340] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.454 [INFO][4340] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.462 [INFO][4340] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.462 [INFO][4340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" host="localhost" May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.462 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:17:08.492046 containerd[1492]: 2025-05-27 17:17:08.462 [INFO][4340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" HandleID="k8s-pod-network.4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Workload="localhost-k8s-csi--node--driver--v7797-eth0" May 27 17:17:08.493056 containerd[1492]: 2025-05-27 17:17:08.465 [INFO][4326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v7797-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3279a25-7c72-48a0-98d2-a10f51772f5c", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v7797", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17aed5a206f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:08.493056 containerd[1492]: 2025-05-27 17:17:08.465 [INFO][4326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-eth0" May 27 17:17:08.493056 containerd[1492]: 2025-05-27 17:17:08.465 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17aed5a206f ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-eth0" May 27 17:17:08.493056 containerd[1492]: 2025-05-27 17:17:08.469 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-eth0" May 27 17:17:08.493056 containerd[1492]: 2025-05-27 17:17:08.470 [INFO][4326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v7797-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3279a25-7c72-48a0-98d2-a10f51772f5c", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be", Pod:"csi-node-driver-v7797", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17aed5a206f", MAC:"ae:e9:b7:d5:34:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:08.493056 containerd[1492]: 2025-05-27 17:17:08.482 [INFO][4326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" 
Namespace="calico-system" Pod="csi-node-driver-v7797" WorkloadEndpoint="localhost-k8s-csi--node--driver--v7797-eth0" May 27 17:17:08.532548 containerd[1492]: time="2025-05-27T17:17:08.532296625Z" level=info msg="connecting to shim 4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be" address="unix:///run/containerd/s/847a09d5c4c04b64a06cda63975ab624fe0606b3f0c7fa83ea712eb3cd5f0cff" namespace=k8s.io protocol=ttrpc version=3 May 27 17:17:08.540887 kubelet[2628]: I0527 17:17:08.540411 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d69xj" podStartSLOduration=34.540386172 podStartE2EDuration="34.540386172s" podCreationTimestamp="2025-05-27 17:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:17:08.516627996 +0000 UTC m=+40.252207057" watchObservedRunningTime="2025-05-27 17:17:08.540386172 +0000 UTC m=+40.275965233" May 27 17:17:08.562860 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:49332.service - OpenSSH per-connection server daemon (10.0.0.1:49332). May 27 17:17:08.575690 systemd[1]: Started cri-containerd-4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be.scope - libcontainer container 4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be. 
May 27 17:17:08.593506 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:08.608835 containerd[1492]: time="2025-05-27T17:17:08.608671556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v7797,Uid:b3279a25-7c72-48a0-98d2-a10f51772f5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be\"" May 27 17:17:08.628715 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 49332 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY May 27 17:17:08.630677 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:17:08.636921 systemd-logind[1472]: New session 8 of user core. May 27 17:17:08.641653 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 17:17:08.682700 systemd-networkd[1420]: cali9e129371e88: Gained IPv6LL May 27 17:17:08.824011 sshd[4410]: Connection closed by 10.0.0.1 port 49332 May 27 17:17:08.824867 sshd-session[4390]: pam_unix(sshd:session): session closed for user core May 27 17:17:08.828903 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. May 27 17:17:08.829221 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:49332.service: Deactivated successfully. May 27 17:17:08.833651 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:17:08.836327 systemd-logind[1472]: Removed session 8. 
May 27 17:17:08.938570 systemd-networkd[1420]: calife5480e45b6: Gained IPv6LL May 27 17:17:09.354586 containerd[1492]: time="2025-05-27T17:17:09.354520775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7778c848-f6n2h,Uid:8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3,Namespace:calico-system,Attempt:0,}" May 27 17:17:09.354586 containerd[1492]: time="2025-05-27T17:17:09.354571697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-6kgv7,Uid:deaf44af-7419-4d5c-a162-bb9c189b0506,Namespace:calico-apiserver,Attempt:0,}" May 27 17:17:09.354955 containerd[1492]: time="2025-05-27T17:17:09.354640180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vm748,Uid:493ad74e-e26f-46b2-b905-4b7831d09a1c,Namespace:calico-system,Attempt:0,}" May 27 17:17:09.354955 containerd[1492]: time="2025-05-27T17:17:09.354521655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59wwh,Uid:9298ee2d-a669-4c63-86c0-df6bc71a1662,Namespace:kube-system,Attempt:0,}" May 27 17:17:09.511096 systemd-networkd[1420]: cali649b19ce893: Link UP May 27 17:17:09.512631 systemd-networkd[1420]: cali649b19ce893: Gained carrier May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.396 [INFO][4458] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.427 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0 calico-apiserver-848495b658- calico-apiserver deaf44af-7419-4d5c-a162-bb9c189b0506 786 0 2025-05-27 17:16:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:848495b658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-848495b658-6kgv7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali649b19ce893 [] [] }} ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.427 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.461 [INFO][4513] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" HandleID="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Workload="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.462 [INFO][4513] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" HandleID="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Workload="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000596af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-848495b658-6kgv7", "timestamp":"2025-05-27 17:17:09.461863215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:09.527340 containerd[1492]: 2025-05-27 
17:17:09.462 [INFO][4513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.462 [INFO][4513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.463 [INFO][4513] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.475 [INFO][4513] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.481 [INFO][4513] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.485 [INFO][4513] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.487 [INFO][4513] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.490 [INFO][4513] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.490 [INFO][4513] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.491 [INFO][4513] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9 May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.496 [INFO][4513] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" host="localhost" 
May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.503 [INFO][4513] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.503 [INFO][4513] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" host="localhost" May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.503 [INFO][4513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:17:09.527340 containerd[1492]: 2025-05-27 17:17:09.503 [INFO][4513] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" HandleID="k8s-pod-network.c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Workload="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" May 27 17:17:09.528197 containerd[1492]: 2025-05-27 17:17:09.505 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0", GenerateName:"calico-apiserver-848495b658-", Namespace:"calico-apiserver", SelfLink:"", UID:"deaf44af-7419-4d5c-a162-bb9c189b0506", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848495b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-848495b658-6kgv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali649b19ce893", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.528197 containerd[1492]: 2025-05-27 17:17:09.506 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" May 27 17:17:09.528197 containerd[1492]: 2025-05-27 17:17:09.506 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali649b19ce893 ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" May 27 17:17:09.528197 containerd[1492]: 2025-05-27 17:17:09.512 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" May 27 17:17:09.528197 containerd[1492]: 2025-05-27 17:17:09.513 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0", GenerateName:"calico-apiserver-848495b658-", Namespace:"calico-apiserver", SelfLink:"", UID:"deaf44af-7419-4d5c-a162-bb9c189b0506", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848495b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9", Pod:"calico-apiserver-848495b658-6kgv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali649b19ce893", MAC:"06:17:72:b5:f0:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.528197 containerd[1492]: 2025-05-27 17:17:09.524 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" Namespace="calico-apiserver" Pod="calico-apiserver-848495b658-6kgv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--848495b658--6kgv7-eth0" May 27 17:17:09.549552 containerd[1492]: time="2025-05-27T17:17:09.549479856Z" level=info msg="connecting to shim c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9" address="unix:///run/containerd/s/884b397ffe62306ff4e1ff53b43c1d52482c202a345433a4bba3c853697ea28a" namespace=k8s.io protocol=ttrpc version=3 May 27 17:17:09.576622 systemd[1]: Started cri-containerd-c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9.scope - libcontainer container c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9. May 27 17:17:09.598424 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:09.614854 systemd-networkd[1420]: calib0c8f333e3d: Link UP May 27 17:17:09.615996 systemd-networkd[1420]: calib0c8f333e3d: Gained carrier May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.423 [INFO][4451] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.443 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0 calico-kube-controllers-5f7778c848- calico-system 8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3 785 0 2025-05-27 17:16:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f7778c848 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] 
[] [] []} {k8s localhost calico-kube-controllers-5f7778c848-f6n2h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib0c8f333e3d [] [] }} ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.444 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.482 [INFO][4534] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" HandleID="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Workload="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.482 [INFO][4534] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" HandleID="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Workload="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f7778c848-f6n2h", "timestamp":"2025-05-27 17:17:09.482379531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.482 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.503 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.504 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.576 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.582 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.587 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.588 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.591 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.592 [INFO][4534] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.593 [INFO][4534] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752 May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.599 [INFO][4534] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.606 [INFO][4534] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.606 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" host="localhost" May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.607 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:17:09.632817 containerd[1492]: 2025-05-27 17:17:09.607 [INFO][4534] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" HandleID="k8s-pod-network.618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Workload="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" May 27 17:17:09.633604 containerd[1492]: 2025-05-27 17:17:09.609 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0", GenerateName:"calico-kube-controllers-5f7778c848-", Namespace:"calico-system", SelfLink:"", UID:"8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 
16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7778c848", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f7778c848-f6n2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib0c8f333e3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.633604 containerd[1492]: 2025-05-27 17:17:09.611 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" May 27 17:17:09.633604 containerd[1492]: 2025-05-27 17:17:09.611 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0c8f333e3d ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" May 27 17:17:09.633604 containerd[1492]: 2025-05-27 17:17:09.613 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" May 27 17:17:09.633604 containerd[1492]: 2025-05-27 17:17:09.613 [INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0", GenerateName:"calico-kube-controllers-5f7778c848-", Namespace:"calico-system", SelfLink:"", UID:"8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7778c848", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752", Pod:"calico-kube-controllers-5f7778c848-f6n2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib0c8f333e3d", MAC:"fe:17:b1:46:90:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.633604 containerd[1492]: 2025-05-27 17:17:09.628 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" Namespace="calico-system" Pod="calico-kube-controllers-5f7778c848-f6n2h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7778c848--f6n2h-eth0" May 27 17:17:09.639037 containerd[1492]: time="2025-05-27T17:17:09.638996106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848495b658-6kgv7,Uid:deaf44af-7419-4d5c-a162-bb9c189b0506,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9\"" May 27 17:17:09.660280 containerd[1492]: time="2025-05-27T17:17:09.660239855Z" level=info msg="connecting to shim 618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752" address="unix:///run/containerd/s/4570e1fb8368010b00149a42804986a4ec77953075329481de6de927e7a3973a" namespace=k8s.io protocol=ttrpc version=3 May 27 17:17:09.683843 systemd[1]: Started cri-containerd-618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752.scope - libcontainer container 618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752. 
May 27 17:17:09.703999 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:09.731032 systemd-networkd[1420]: cali3dc75d14786: Link UP May 27 17:17:09.732185 systemd-networkd[1420]: cali3dc75d14786: Gained carrier May 27 17:17:09.737841 containerd[1492]: time="2025-05-27T17:17:09.737784307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7778c848-f6n2h,Uid:8fe5a4c6-ca92-4cfe-a53a-76c1984b9ca3,Namespace:calico-system,Attempt:0,} returns sandbox id \"618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752\"" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.402 [INFO][4487] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.431 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--59wwh-eth0 coredns-668d6bf9bc- kube-system 9298ee2d-a669-4c63-86c0-df6bc71a1662 779 0 2025-05-27 17:16:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-59wwh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3dc75d14786 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.431 [INFO][4487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.483 [INFO][4520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" HandleID="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Workload="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.484 [INFO][4520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" HandleID="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Workload="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e1010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-59wwh", "timestamp":"2025-05-27 17:17:09.483961164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.484 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.607 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.607 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.677 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.687 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.698 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.701 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.705 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.706 [INFO][4520] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.710 [INFO][4520] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.715 [INFO][4520] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.722 [INFO][4520] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.722 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" host="localhost" May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.722 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:17:09.746819 containerd[1492]: 2025-05-27 17:17:09.725 [INFO][4520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" HandleID="k8s-pod-network.688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Workload="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" May 27 17:17:09.747642 containerd[1492]: 2025-05-27 17:17:09.728 [INFO][4487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--59wwh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9298ee2d-a669-4c63-86c0-df6bc71a1662", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-59wwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dc75d14786", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.747642 containerd[1492]: 2025-05-27 17:17:09.728 [INFO][4487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" May 27 17:17:09.747642 containerd[1492]: 2025-05-27 17:17:09.728 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dc75d14786 ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" May 27 17:17:09.747642 containerd[1492]: 2025-05-27 17:17:09.733 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" May 27 17:17:09.747642 containerd[1492]: 2025-05-27 17:17:09.733 [INFO][4487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--59wwh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9298ee2d-a669-4c63-86c0-df6bc71a1662", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db", Pod:"coredns-668d6bf9bc-59wwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dc75d14786", MAC:"be:21:79:57:a8:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.747642 containerd[1492]: 2025-05-27 17:17:09.744 [INFO][4487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" Namespace="kube-system" Pod="coredns-668d6bf9bc-59wwh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--59wwh-eth0" May 27 17:17:09.767068 containerd[1492]: time="2025-05-27T17:17:09.766907784Z" level=info msg="connecting to shim 688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db" address="unix:///run/containerd/s/582d5003ea20ecffa306d7cfdaa05f3c4ebebcedeba3ed15f6a9c2f3021b9774" namespace=k8s.io protocol=ttrpc version=3 May 27 17:17:09.770677 systemd-networkd[1420]: cali17aed5a206f: Gained IPv6LL May 27 17:17:09.798601 systemd[1]: Started cri-containerd-688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db.scope - libcontainer container 688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db. 
May 27 17:17:09.820861 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:09.823390 systemd-networkd[1420]: cali8b6fa15a6ce: Link UP May 27 17:17:09.824711 systemd-networkd[1420]: cali8b6fa15a6ce: Gained carrier May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.406 [INFO][4468] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.436 [INFO][4468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--vm748-eth0 goldmane-78d55f7ddc- calico-system 493ad74e-e26f-46b2-b905-4b7831d09a1c 788 0 2025-05-27 17:16:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-vm748 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8b6fa15a6ce [] [] }} ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.436 [INFO][4468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.483 [INFO][4522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" HandleID="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" 
Workload="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.483 [INFO][4522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" HandleID="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Workload="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ac730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-vm748", "timestamp":"2025-05-27 17:17:09.483189688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.484 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.722 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.722 [INFO][4522] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.777 [INFO][4522] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.787 [INFO][4522] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.794 [INFO][4522] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.796 [INFO][4522] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.800 [INFO][4522] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.800 [INFO][4522] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.803 [INFO][4522] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0 May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.808 [INFO][4522] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.817 [INFO][4522] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.818 [INFO][4522] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" host="localhost" May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.818 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:17:09.840782 containerd[1492]: 2025-05-27 17:17:09.818 [INFO][4522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" HandleID="k8s-pod-network.c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Workload="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" May 27 17:17:09.841967 containerd[1492]: 2025-05-27 17:17:09.820 [INFO][4468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--vm748-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"493ad74e-e26f-46b2-b905-4b7831d09a1c", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-vm748", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b6fa15a6ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.841967 containerd[1492]: 2025-05-27 17:17:09.820 [INFO][4468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" May 27 17:17:09.841967 containerd[1492]: 2025-05-27 17:17:09.820 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b6fa15a6ce ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" May 27 17:17:09.841967 containerd[1492]: 2025-05-27 17:17:09.825 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" May 27 17:17:09.841967 containerd[1492]: 2025-05-27 17:17:09.825 [INFO][4468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" 
WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--vm748-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"493ad74e-e26f-46b2-b905-4b7831d09a1c", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0", Pod:"goldmane-78d55f7ddc-vm748", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b6fa15a6ce", MAC:"f6:b6:4b:01:4d:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:17:09.841967 containerd[1492]: 2025-05-27 17:17:09.837 [INFO][4468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vm748" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vm748-eth0" May 27 17:17:09.868023 containerd[1492]: time="2025-05-27T17:17:09.867710239Z" level=info msg="connecting to shim 
c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0" address="unix:///run/containerd/s/33ae846195dfdd993b9e389e282ed8c952151b3476630210e9f982c99065267f" namespace=k8s.io protocol=ttrpc version=3 May 27 17:17:09.868023 containerd[1492]: time="2025-05-27T17:17:09.867844325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59wwh,Uid:9298ee2d-a669-4c63-86c0-df6bc71a1662,Namespace:kube-system,Attempt:0,} returns sandbox id \"688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db\"" May 27 17:17:09.874456 containerd[1492]: time="2025-05-27T17:17:09.874398511Z" level=info msg="CreateContainer within sandbox \"688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:17:09.883466 containerd[1492]: time="2025-05-27T17:17:09.882871385Z" level=info msg="Container 30bcf0c1e3c6daf92e39b3141b71728d6a311127253e24b09b76bb13aedfdb3a: CDI devices from CRI Config.CDIDevices: []" May 27 17:17:09.890972 containerd[1492]: time="2025-05-27T17:17:09.890936801Z" level=info msg="CreateContainer within sandbox \"688f502e503e834bf2b323694ef6b77c299329e2bd77ffb851f02b64cb1ee5db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30bcf0c1e3c6daf92e39b3141b71728d6a311127253e24b09b76bb13aedfdb3a\"" May 27 17:17:09.892480 containerd[1492]: time="2025-05-27T17:17:09.892364227Z" level=info msg="StartContainer for \"30bcf0c1e3c6daf92e39b3141b71728d6a311127253e24b09b76bb13aedfdb3a\"" May 27 17:17:09.893169 containerd[1492]: time="2025-05-27T17:17:09.893145424Z" level=info msg="connecting to shim 30bcf0c1e3c6daf92e39b3141b71728d6a311127253e24b09b76bb13aedfdb3a" address="unix:///run/containerd/s/582d5003ea20ecffa306d7cfdaa05f3c4ebebcedeba3ed15f6a9c2f3021b9774" protocol=ttrpc version=3 May 27 17:17:09.899622 systemd[1]: Started cri-containerd-c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0.scope - libcontainer container 
c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0. May 27 17:17:09.922549 systemd[1]: Started cri-containerd-30bcf0c1e3c6daf92e39b3141b71728d6a311127253e24b09b76bb13aedfdb3a.scope - libcontainer container 30bcf0c1e3c6daf92e39b3141b71728d6a311127253e24b09b76bb13aedfdb3a. May 27 17:17:09.937702 kubelet[2628]: I0527 17:17:09.937659 2628 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:17:09.941666 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:17:09.990980 containerd[1492]: time="2025-05-27T17:17:09.990933299Z" level=info msg="StartContainer for \"30bcf0c1e3c6daf92e39b3141b71728d6a311127253e24b09b76bb13aedfdb3a\" returns successfully" May 27 17:17:10.011214 containerd[1492]: time="2025-05-27T17:17:10.011175869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vm748,Uid:493ad74e-e26f-46b2-b905-4b7831d09a1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8d0e23760cfe3f282f0474fb17de23dbf2ee0831d45d576a74c349f45aa61a0\"" May 27 17:17:10.251116 containerd[1492]: time="2025-05-27T17:17:10.250996686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:17:10.251978 containerd[1492]: time="2025-05-27T17:17:10.251944129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 27 17:17:10.252739 containerd[1492]: time="2025-05-27T17:17:10.252709564Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:17:10.257092 containerd[1492]: time="2025-05-27T17:17:10.257054681Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:10.257705 containerd[1492]: time="2025-05-27T17:17:10.257668269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 2.567612345s"
May 27 17:17:10.257746 containerd[1492]: time="2025-05-27T17:17:10.257703711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\""
May 27 17:17:10.258768 containerd[1492]: time="2025-05-27T17:17:10.258667395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\""
May 27 17:17:10.260869 containerd[1492]: time="2025-05-27T17:17:10.260840293Z" level=info msg="CreateContainer within sandbox \"8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 17:17:10.266252 containerd[1492]: time="2025-05-27T17:17:10.266216658Z" level=info msg="Container a93465db478be0fc4db2b55556609862aa6b397e81125b7e3fbeeb174dd22aa3: CDI devices from CRI Config.CDIDevices: []"
May 27 17:17:10.271895 containerd[1492]: time="2025-05-27T17:17:10.271841593Z" level=info msg="CreateContainer within sandbox \"8ec8097bd788fdb096753930e30305f9da61d364461c7672586cb5599169d88c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a93465db478be0fc4db2b55556609862aa6b397e81125b7e3fbeeb174dd22aa3\""
May 27 17:17:10.272553 containerd[1492]: time="2025-05-27T17:17:10.272533705Z" level=info msg="StartContainer for \"a93465db478be0fc4db2b55556609862aa6b397e81125b7e3fbeeb174dd22aa3\""
May 27 17:17:10.273602 containerd[1492]: time="2025-05-27T17:17:10.273523710Z" level=info msg="connecting to shim a93465db478be0fc4db2b55556609862aa6b397e81125b7e3fbeeb174dd22aa3" address="unix:///run/containerd/s/bdd3b8140355e0a4ee817c23fa4d8cfe556238d70ffca92d44ee340759bc1d5c" protocol=ttrpc version=3
May 27 17:17:10.292593 systemd[1]: Started cri-containerd-a93465db478be0fc4db2b55556609862aa6b397e81125b7e3fbeeb174dd22aa3.scope - libcontainer container a93465db478be0fc4db2b55556609862aa6b397e81125b7e3fbeeb174dd22aa3.
May 27 17:17:10.352334 containerd[1492]: time="2025-05-27T17:17:10.352258687Z" level=info msg="StartContainer for \"a93465db478be0fc4db2b55556609862aa6b397e81125b7e3fbeeb174dd22aa3\" returns successfully"
May 27 17:17:10.516191 kubelet[2628]: I0527 17:17:10.515551 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-59wwh" podStartSLOduration=36.515533706 podStartE2EDuration="36.515533706s" podCreationTimestamp="2025-05-27 17:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:17:10.514491139 +0000 UTC m=+42.250070200" watchObservedRunningTime="2025-05-27 17:17:10.515533706 +0000 UTC m=+42.251112767"
May 27 17:17:10.554596 kubelet[2628]: I0527 17:17:10.554537 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-848495b658-zwg5f" podStartSLOduration=22.98557467 podStartE2EDuration="25.554518397s" podCreationTimestamp="2025-05-27 17:16:45 +0000 UTC" firstStartedPulling="2025-05-27 17:17:07.689591502 +0000 UTC m=+39.425170523" lastFinishedPulling="2025-05-27 17:17:10.258535189 +0000 UTC m=+41.994114250" observedRunningTime="2025-05-27 17:17:10.541129669 +0000 UTC m=+42.276708730" watchObservedRunningTime="2025-05-27 17:17:10.554518397 +0000 UTC m=+42.290097458"
May 27 17:17:10.666706 systemd-networkd[1420]: cali649b19ce893: Gained IPv6LL
May 27 17:17:10.972163 systemd-networkd[1420]: vxlan.calico: Link UP
May 27 17:17:10.972170 systemd-networkd[1420]: vxlan.calico: Gained carrier
May 27 17:17:10.986886 systemd-networkd[1420]: cali8b6fa15a6ce: Gained IPv6LL
May 27 17:17:11.050564 systemd-networkd[1420]: calib0c8f333e3d: Gained IPv6LL
May 27 17:17:11.498639 systemd-networkd[1420]: cali3dc75d14786: Gained IPv6LL
May 27 17:17:11.508735 kubelet[2628]: I0527 17:17:11.508700 2628 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:17:11.688747 containerd[1492]: time="2025-05-27T17:17:11.687788834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:11.690269 containerd[1492]: time="2025-05-27T17:17:11.690189221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240"
May 27 17:17:11.691414 containerd[1492]: time="2025-05-27T17:17:11.691388434Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:11.698309 containerd[1492]: time="2025-05-27T17:17:11.697710874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.439015238s"
May 27 17:17:11.698465 containerd[1492]: time="2025-05-27T17:17:11.698266139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:11.698465 containerd[1492]: time="2025-05-27T17:17:11.698395185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\""
May 27 17:17:11.701661 containerd[1492]: time="2025-05-27T17:17:11.701625208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 27 17:17:11.702156 containerd[1492]: time="2025-05-27T17:17:11.702109630Z" level=info msg="CreateContainer within sandbox \"4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 27 17:17:11.719487 containerd[1492]: time="2025-05-27T17:17:11.718461435Z" level=info msg="Container e2a59256c972d743d141282e3033e2301f7bd0e80ccdbe3d622862fbf5d8345d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:17:11.725596 containerd[1492]: time="2025-05-27T17:17:11.725547670Z" level=info msg="CreateContainer within sandbox \"4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e2a59256c972d743d141282e3033e2301f7bd0e80ccdbe3d622862fbf5d8345d\""
May 27 17:17:11.726009 containerd[1492]: time="2025-05-27T17:17:11.725985409Z" level=info msg="StartContainer for \"e2a59256c972d743d141282e3033e2301f7bd0e80ccdbe3d622862fbf5d8345d\""
May 27 17:17:11.728211 containerd[1492]: time="2025-05-27T17:17:11.728166266Z" level=info msg="connecting to shim e2a59256c972d743d141282e3033e2301f7bd0e80ccdbe3d622862fbf5d8345d" address="unix:///run/containerd/s/847a09d5c4c04b64a06cda63975ab624fe0606b3f0c7fa83ea712eb3cd5f0cff" protocol=ttrpc version=3
May 27 17:17:11.748607 systemd[1]: Started cri-containerd-e2a59256c972d743d141282e3033e2301f7bd0e80ccdbe3d622862fbf5d8345d.scope - libcontainer container e2a59256c972d743d141282e3033e2301f7bd0e80ccdbe3d622862fbf5d8345d.
May 27 17:17:11.784257 containerd[1492]: time="2025-05-27T17:17:11.784208072Z" level=info msg="StartContainer for \"e2a59256c972d743d141282e3033e2301f7bd0e80ccdbe3d622862fbf5d8345d\" returns successfully"
May 27 17:17:11.951363 containerd[1492]: time="2025-05-27T17:17:11.951308486Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:11.952473 containerd[1492]: time="2025-05-27T17:17:11.952135762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 27 17:17:11.954214 containerd[1492]: time="2025-05-27T17:17:11.954183613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 252.169988ms"
May 27 17:17:11.954313 containerd[1492]: time="2025-05-27T17:17:11.954299018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\""
May 27 17:17:11.955338 containerd[1492]: time="2025-05-27T17:17:11.955303543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\""
May 27 17:17:11.957878 containerd[1492]: time="2025-05-27T17:17:11.957821615Z" level=info msg="CreateContainer within sandbox \"c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 17:17:11.965469 containerd[1492]: time="2025-05-27T17:17:11.964855807Z" level=info msg="Container ef26b6b0a0b372e74da9e72a9b69c9e399e2f05d67ff5800c8c8e4f33dc8a529: CDI devices from CRI Config.CDIDevices: []"
May 27 17:17:11.973027 containerd[1492]: time="2025-05-27T17:17:11.972993688Z" level=info msg="CreateContainer within sandbox \"c33ea8e671138cedef0cc2e05da231d2ff363bc38ea4097c62d750c256d7b8f9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef26b6b0a0b372e74da9e72a9b69c9e399e2f05d67ff5800c8c8e4f33dc8a529\""
May 27 17:17:11.973413 containerd[1492]: time="2025-05-27T17:17:11.973386225Z" level=info msg="StartContainer for \"ef26b6b0a0b372e74da9e72a9b69c9e399e2f05d67ff5800c8c8e4f33dc8a529\""
May 27 17:17:11.975842 containerd[1492]: time="2025-05-27T17:17:11.975817253Z" level=info msg="connecting to shim ef26b6b0a0b372e74da9e72a9b69c9e399e2f05d67ff5800c8c8e4f33dc8a529" address="unix:///run/containerd/s/884b397ffe62306ff4e1ff53b43c1d52482c202a345433a4bba3c853697ea28a" protocol=ttrpc version=3
May 27 17:17:11.996610 systemd[1]: Started cri-containerd-ef26b6b0a0b372e74da9e72a9b69c9e399e2f05d67ff5800c8c8e4f33dc8a529.scope - libcontainer container ef26b6b0a0b372e74da9e72a9b69c9e399e2f05d67ff5800c8c8e4f33dc8a529.
May 27 17:17:12.029819 containerd[1492]: time="2025-05-27T17:17:12.029730337Z" level=info msg="StartContainer for \"ef26b6b0a0b372e74da9e72a9b69c9e399e2f05d67ff5800c8c8e4f33dc8a529\" returns successfully"
May 27 17:17:12.526462 kubelet[2628]: I0527 17:17:12.525631 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-848495b658-6kgv7" podStartSLOduration=25.210938318 podStartE2EDuration="27.52561444s" podCreationTimestamp="2025-05-27 17:16:45 +0000 UTC" firstStartedPulling="2025-05-27 17:17:09.64038221 +0000 UTC m=+41.375961271" lastFinishedPulling="2025-05-27 17:17:11.955058332 +0000 UTC m=+43.690637393" observedRunningTime="2025-05-27 17:17:12.522873921 +0000 UTC m=+44.258452942" watchObservedRunningTime="2025-05-27 17:17:12.52561444 +0000 UTC m=+44.261193501"
May 27 17:17:12.714985 systemd-networkd[1420]: vxlan.calico: Gained IPv6LL
May 27 17:17:13.518306 kubelet[2628]: I0527 17:17:13.518274 2628 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:17:13.845050 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:51108.service - OpenSSH per-connection server daemon (10.0.0.1:51108).
May 27 17:17:13.930540 containerd[1492]: time="2025-05-27T17:17:13.930485125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:13.931405 containerd[1492]: time="2025-05-27T17:17:13.931097591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219"
May 27 17:17:13.932931 containerd[1492]: time="2025-05-27T17:17:13.932863546Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:13.935028 containerd[1492]: time="2025-05-27T17:17:13.934976875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:13.935528 containerd[1492]: time="2025-05-27T17:17:13.935500257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 1.980162793s"
May 27 17:17:13.935563 containerd[1492]: time="2025-05-27T17:17:13.935539099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\""
May 27 17:17:13.942479 containerd[1492]: time="2025-05-27T17:17:13.938658671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:17:13.943063 containerd[1492]: time="2025-05-27T17:17:13.943015656Z" level=info msg="CreateContainer within sandbox \"618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 27 17:17:13.953650 containerd[1492]: time="2025-05-27T17:17:13.953614386Z" level=info msg="Container eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929: CDI devices from CRI Config.CDIDevices: []"
May 27 17:17:13.960190 containerd[1492]: time="2025-05-27T17:17:13.960025898Z" level=info msg="CreateContainer within sandbox \"618a8ab479827df33563931d1d459bc15e30b2c298f5d686bd14f418dfdd7752\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929\""
May 27 17:17:13.960608 containerd[1492]: time="2025-05-27T17:17:13.960550480Z" level=info msg="StartContainer for \"eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929\""
May 27 17:17:13.961800 containerd[1492]: time="2025-05-27T17:17:13.961733330Z" level=info msg="connecting to shim eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929" address="unix:///run/containerd/s/4570e1fb8368010b00149a42804986a4ec77953075329481de6de927e7a3973a" protocol=ttrpc version=3
May 27 17:17:13.969900 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 51108 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:13.971916 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:13.978331 systemd-logind[1472]: New session 9 of user core.
May 27 17:17:13.984628 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 17:17:13.987551 systemd[1]: Started cri-containerd-eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929.scope - libcontainer container eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929.
May 27 17:17:14.033520 containerd[1492]: time="2025-05-27T17:17:14.033481146Z" level=info msg="StartContainer for \"eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929\" returns successfully"
May 27 17:17:14.095356 containerd[1492]: time="2025-05-27T17:17:14.095224150Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:17:14.098113 containerd[1492]: time="2025-05-27T17:17:14.097966344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:17:14.098113 containerd[1492]: time="2025-05-27T17:17:14.098033547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:17:14.098357 kubelet[2628]: E0527 17:17:14.098246 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:17:14.098357 kubelet[2628]: E0527 17:17:14.098297 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:17:14.099339 kubelet[2628]: E0527 17:17:14.099167 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2qsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-vm748_calico-system(493ad74e-e26f-46b2-b905-4b7831d09a1c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:17:14.099565 containerd[1492]: time="2025-05-27T17:17:14.098661693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 27 17:17:14.100846 kubelet[2628]: E0527 17:17:14.100810 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vm748" podUID="493ad74e-e26f-46b2-b905-4b7831d09a1c"
May 27 17:17:14.236322 sshd[5103]: Connection closed by 10.0.0.1 port 51108
May 27 17:17:14.236667 sshd-session[5088]: pam_unix(sshd:session): session closed for user core
May 27 17:17:14.240100 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:51108.service: Deactivated successfully.
May 27 17:17:14.242905 systemd[1]: session-9.scope: Deactivated successfully.
May 27 17:17:14.243655 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit.
May 27 17:17:14.244816 systemd-logind[1472]: Removed session 9.
May 27 17:17:14.523826 kubelet[2628]: E0527 17:17:14.523645 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vm748" podUID="493ad74e-e26f-46b2-b905-4b7831d09a1c"
May 27 17:17:14.551075 kubelet[2628]: I0527 17:17:14.551012 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f7778c848-f6n2h" podStartSLOduration=23.355876457 podStartE2EDuration="27.550994403s" podCreationTimestamp="2025-05-27 17:16:47 +0000 UTC" firstStartedPulling="2025-05-27 17:17:09.741088141 +0000 UTC m=+41.476667202" lastFinishedPulling="2025-05-27 17:17:13.936206087 +0000 UTC m=+45.671785148" observedRunningTime="2025-05-27 17:17:14.549253891 +0000 UTC m=+46.284832952" watchObservedRunningTime="2025-05-27 17:17:14.550994403 +0000 UTC m=+46.286573464"
May 27 17:17:15.295023 containerd[1492]: time="2025-05-27T17:17:15.294969585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:15.296014 containerd[1492]: time="2025-05-27T17:17:15.295812099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925"
May 27 17:17:15.296819 containerd[1492]: time="2025-05-27T17:17:15.296785418Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:15.298708 containerd[1492]: time="2025-05-27T17:17:15.298667295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:17:15.299471 containerd[1492]: time="2025-05-27T17:17:15.299426686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.200736792s"
May 27 17:17:15.299539 containerd[1492]: time="2025-05-27T17:17:15.299476688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\""
May 27 17:17:15.302256 containerd[1492]: time="2025-05-27T17:17:15.302205879Z" level=info msg="CreateContainer within sandbox \"4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 27 17:17:15.310966 containerd[1492]: time="2025-05-27T17:17:15.310913514Z" level=info msg="Container 3350f018dcbd4c7ade871a84b44988c1f12cd20ec1b7b20a61397953a6889d32: CDI devices from CRI Config.CDIDevices: []"
May 27 17:17:15.319137 containerd[1492]: time="2025-05-27T17:17:15.319093247Z" level=info msg="CreateContainer within sandbox \"4204077d1d2bb6636b04401553c8ad87765c175d3434ed2f8b26667b4bdf15be\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3350f018dcbd4c7ade871a84b44988c1f12cd20ec1b7b20a61397953a6889d32\""
May 27 17:17:15.319834 containerd[1492]: time="2025-05-27T17:17:15.319757554Z" level=info msg="StartContainer for \"3350f018dcbd4c7ade871a84b44988c1f12cd20ec1b7b20a61397953a6889d32\""
May 27 17:17:15.321749 containerd[1492]: time="2025-05-27T17:17:15.321712593Z" level=info msg="connecting to shim 3350f018dcbd4c7ade871a84b44988c1f12cd20ec1b7b20a61397953a6889d32" address="unix:///run/containerd/s/847a09d5c4c04b64a06cda63975ab624fe0606b3f0c7fa83ea712eb3cd5f0cff" protocol=ttrpc version=3
May 27 17:17:15.360684 systemd[1]: Started cri-containerd-3350f018dcbd4c7ade871a84b44988c1f12cd20ec1b7b20a61397953a6889d32.scope - libcontainer container 3350f018dcbd4c7ade871a84b44988c1f12cd20ec1b7b20a61397953a6889d32.
May 27 17:17:15.409706 containerd[1492]: time="2025-05-27T17:17:15.409658214Z" level=info msg="StartContainer for \"3350f018dcbd4c7ade871a84b44988c1f12cd20ec1b7b20a61397953a6889d32\" returns successfully"
May 27 17:17:15.540212 kubelet[2628]: I0527 17:17:15.540072 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v7797" podStartSLOduration=21.850298487 podStartE2EDuration="28.540032882s" podCreationTimestamp="2025-05-27 17:16:47 +0000 UTC" firstStartedPulling="2025-05-27 17:17:08.610627329 +0000 UTC m=+40.346206350" lastFinishedPulling="2025-05-27 17:17:15.300361724 +0000 UTC m=+47.035940745" observedRunningTime="2025-05-27 17:17:15.539841394 +0000 UTC m=+47.275420455" watchObservedRunningTime="2025-05-27 17:17:15.540032882 +0000 UTC m=+47.275611943"
May 27 17:17:15.562727 containerd[1492]: time="2025-05-27T17:17:15.562687004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eed7a4b2930d459130b4f5ab4bb09578b4df9cbad29f824125d81daf89da2929\" id:\"6204768c42ce5ec4c56f8fb8d96cde498a390abeb932f520a7bbed5331a59c96\" pid:5201 exited_at:{seconds:1748366235 nanos:562382152}"
May 27 17:17:16.422030 kubelet[2628]: I0527 17:17:16.421995 2628 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 17:17:16.423143 kubelet[2628]: I0527 17:17:16.423125 2628 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 17:17:18.356233 containerd[1492]: time="2025-05-27T17:17:18.356137401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 17:17:18.513827 containerd[1492]: time="2025-05-27T17:17:18.513783155Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:17:18.515172 containerd[1492]: time="2025-05-27T17:17:18.515122486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:17:18.515246 containerd[1492]: time="2025-05-27T17:17:18.515209209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 17:17:18.515368 kubelet[2628]: E0527 17:17:18.515322 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:17:18.515665 kubelet[2628]: E0527 17:17:18.515374 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:17:18.515665 kubelet[2628]: E0527 17:17:18.515502 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:85fac0f966784eb3ae65fa9fa496f2d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8687f89567-wlhb2_calico-system(2fb07cfb-44c7-4704-bd84-8ea56f199724): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:17:18.517533 containerd[1492]: time="2025-05-27T17:17:18.517503178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 17:17:18.669684 containerd[1492]: time="2025-05-27T17:17:18.669550876Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:17:18.681458 containerd[1492]: time="2025-05-27T17:17:18.681384492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:17:18.681584 containerd[1492]: time="2025-05-27T17:17:18.681431574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:17:18.681732 kubelet[2628]: E0527 17:17:18.681687 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:17:18.681819 kubelet[2628]: E0527 17:17:18.681752 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:17:18.681946 kubelet[2628]: E0527 17:17:18.681870 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8687f89567-wlhb2_calico-system(2fb07cfb-44c7-4704-bd84-8ea56f199724): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:17:18.683230 kubelet[2628]: E0527 17:17:18.683185 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-8687f89567-wlhb2" podUID="2fb07cfb-44c7-4704-bd84-8ea56f199724"
May 27 17:17:19.259502 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:51126.service - OpenSSH per-connection server daemon (10.0.0.1:51126).
May 27 17:17:19.318485 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 51126 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:19.320134 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:19.324419 systemd-logind[1472]: New session 10 of user core.
May 27 17:17:19.331595 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 17:17:19.560796 sshd[5226]: Connection closed by 10.0.0.1 port 51126
May 27 17:17:19.561346 sshd-session[5224]: pam_unix(sshd:session): session closed for user core
May 27 17:17:19.575607 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:51126.service: Deactivated successfully.
May 27 17:17:19.577712 systemd[1]: session-10.scope: Deactivated successfully.
May 27 17:17:19.578427 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit.
May 27 17:17:19.582050 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:51128.service - OpenSSH per-connection server daemon (10.0.0.1:51128).
May 27 17:17:19.583002 systemd-logind[1472]: Removed session 10.
May 27 17:17:19.645731 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 51128 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:19.647649 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:19.655734 systemd-logind[1472]: New session 11 of user core.
May 27 17:17:19.660668 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 17:17:19.857253 sshd[5246]: Connection closed by 10.0.0.1 port 51128
May 27 17:17:19.857701 sshd-session[5243]: pam_unix(sshd:session): session closed for user core
May 27 17:17:19.867829 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:51128.service: Deactivated successfully.
May 27 17:17:19.870085 systemd[1]: session-11.scope: Deactivated successfully.
May 27 17:17:19.872833 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit.
May 27 17:17:19.874995 systemd-logind[1472]: Removed session 11.
May 27 17:17:19.877239 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:51142.service - OpenSSH per-connection server daemon (10.0.0.1:51142).
May 27 17:17:19.939814 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 51142 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:19.941130 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:19.945696 systemd-logind[1472]: New session 12 of user core.
May 27 17:17:19.960617 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 17:17:20.104752 sshd[5260]: Connection closed by 10.0.0.1 port 51142
May 27 17:17:20.105071 sshd-session[5258]: pam_unix(sshd:session): session closed for user core
May 27 17:17:20.108518 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:51142.service: Deactivated successfully.
May 27 17:17:20.110291 systemd[1]: session-12.scope: Deactivated successfully.
May 27 17:17:20.111044 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit.
May 27 17:17:20.112131 systemd-logind[1472]: Removed session 12.
May 27 17:17:25.125311 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:49978.service - OpenSSH per-connection server daemon (10.0.0.1:49978).
May 27 17:17:25.186567 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 49978 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:25.187840 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:25.191469 systemd-logind[1472]: New session 13 of user core.
May 27 17:17:25.201584 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 17:17:25.336071 sshd[5283]: Connection closed by 10.0.0.1 port 49978
May 27 17:17:25.336591 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
May 27 17:17:25.350626 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:49978.service: Deactivated successfully.
May 27 17:17:25.352819 systemd[1]: session-13.scope: Deactivated successfully.
May 27 17:17:25.356653 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
May 27 17:17:25.365023 systemd-logind[1472]: Removed session 13.
May 27 17:17:25.370572 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:49986.service - OpenSSH per-connection server daemon (10.0.0.1:49986).
May 27 17:17:25.434030 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 49986 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:25.435577 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:25.439525 systemd-logind[1472]: New session 14 of user core.
May 27 17:17:25.449610 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 17:17:25.663213 sshd[5299]: Connection closed by 10.0.0.1 port 49986
May 27 17:17:25.664273 sshd-session[5297]: pam_unix(sshd:session): session closed for user core
May 27 17:17:25.672704 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:49986.service: Deactivated successfully.
May 27 17:17:25.675295 systemd[1]: session-14.scope: Deactivated successfully.
May 27 17:17:25.676340 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
May 27 17:17:25.678724 systemd-logind[1472]: Removed session 14.
May 27 17:17:25.682291 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:50000.service - OpenSSH per-connection server daemon (10.0.0.1:50000).
May 27 17:17:25.736833 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 50000 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:25.738694 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:25.742895 systemd-logind[1472]: New session 15 of user core.
May 27 17:17:25.758631 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 17:17:26.544780 sshd[5313]: Connection closed by 10.0.0.1 port 50000
May 27 17:17:26.545288 sshd-session[5311]: pam_unix(sshd:session): session closed for user core
May 27 17:17:26.553888 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:50000.service: Deactivated successfully.
May 27 17:17:26.556905 systemd[1]: session-15.scope: Deactivated successfully.
May 27 17:17:26.560067 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
May 27 17:17:26.563283 systemd-logind[1472]: Removed session 15.
May 27 17:17:26.567742 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:50010.service - OpenSSH per-connection server daemon (10.0.0.1:50010).
May 27 17:17:26.623219 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 50010 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:26.624878 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:26.629511 systemd-logind[1472]: New session 16 of user core.
May 27 17:17:26.637594 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 17:17:26.893723 sshd[5335]: Connection closed by 10.0.0.1 port 50010
May 27 17:17:26.894261 sshd-session[5333]: pam_unix(sshd:session): session closed for user core
May 27 17:17:26.907363 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:50010.service: Deactivated successfully.
May 27 17:17:26.909283 systemd[1]: session-16.scope: Deactivated successfully.
May 27 17:17:26.910059 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
May 27 17:17:26.913460 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:50026.service - OpenSSH per-connection server daemon (10.0.0.1:50026).
May 27 17:17:26.914601 systemd-logind[1472]: Removed session 16.
May 27 17:17:26.978041 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 50026 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:26.979763 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:26.985588 systemd-logind[1472]: New session 17 of user core.
May 27 17:17:26.999615 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 17:17:27.115044 sshd[5349]: Connection closed by 10.0.0.1 port 50026
May 27 17:17:27.115366 sshd-session[5347]: pam_unix(sshd:session): session closed for user core
May 27 17:17:27.118691 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:50026.service: Deactivated successfully.
May 27 17:17:27.122686 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:17:27.123429 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
May 27 17:17:27.124523 systemd-logind[1472]: Removed session 17.
May 27 17:17:27.851830 kubelet[2628]: I0527 17:17:27.851758 2628 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:17:28.358271 containerd[1492]: time="2025-05-27T17:17:28.357613486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:17:28.546897 containerd[1492]: time="2025-05-27T17:17:28.546837737Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:17:28.547855 containerd[1492]: time="2025-05-27T17:17:28.547790409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:17:28.547948 containerd[1492]: time="2025-05-27T17:17:28.547893453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:17:28.548084 kubelet[2628]: E0527 17:17:28.548034 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:17:28.548152 kubelet[2628]: E0527 17:17:28.548091 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:17:28.548275 kubelet[2628]: E0527 17:17:28.548212 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2qsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-vm748_calico-system(493ad74e-e26f-46b2-b905-4b7831d09a1c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:17:28.549616 kubelet[2628]: E0527 17:17:28.549569 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vm748" podUID="493ad74e-e26f-46b2-b905-4b7831d09a1c"
May 27 17:17:30.395561 kubelet[2628]: E0527 17:17:30.394387 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-8687f89567-wlhb2" podUID="2fb07cfb-44c7-4704-bd84-8ea56f199724"
May 27 17:17:32.128008 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:50028.service - OpenSSH per-connection server daemon (10.0.0.1:50028).
May 27 17:17:32.189036 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 50028 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:32.190529 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:32.194585 systemd-logind[1472]: New session 18 of user core.
May 27 17:17:32.203619 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:17:32.324119 sshd[5380]: Connection closed by 10.0.0.1 port 50028
May 27 17:17:32.323399 sshd-session[5378]: pam_unix(sshd:session): session closed for user core
May 27 17:17:32.327541 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:50028.service: Deactivated successfully.
May 27 17:17:32.331940 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:17:32.332654 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
May 27 17:17:32.334810 systemd-logind[1472]: Removed session 18.
May 27 17:17:32.544435 containerd[1492]: time="2025-05-27T17:17:32.544319410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3228acf259a7bee02746a073490e847b52994e7358789bf14d654c01e6a8418\" id:\"8f5b402eea7b2b4ff3a16c4a4ace67aa9b4f2f66f3c26405d1330c3a5bc6e299\" pid:5403 exited_at:{seconds:1748366252 nanos:543821954}"
May 27 17:17:37.334945 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:45842.service - OpenSSH per-connection server daemon (10.0.0.1:45842).
May 27 17:17:37.405209 sshd[5419]: Accepted publickey for core from 10.0.0.1 port 45842 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:37.406786 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:37.413377 systemd-logind[1472]: New session 19 of user core.
May 27 17:17:37.423589 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:17:37.624540 sshd[5421]: Connection closed by 10.0.0.1 port 45842
May 27 17:17:37.625312 sshd-session[5419]: pam_unix(sshd:session): session closed for user core
May 27 17:17:37.628884 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:45842.service: Deactivated successfully.
May 27 17:17:37.632926 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:17:37.634922 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
May 27 17:17:37.637659 systemd-logind[1472]: Removed session 19.
May 27 17:17:40.360294 kubelet[2628]: E0527 17:17:40.360252 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vm748" podUID="493ad74e-e26f-46b2-b905-4b7831d09a1c"
May 27 17:17:42.354657 containerd[1492]: time="2025-05-27T17:17:42.354612707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 17:17:42.518473 containerd[1492]: time="2025-05-27T17:17:42.518374833Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:17:42.519466 containerd[1492]: time="2025-05-27T17:17:42.519349733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:17:42.519466 containerd[1492]: time="2025-05-27T17:17:42.519355573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 17:17:42.519625 kubelet[2628]: E0527 17:17:42.519583 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:17:42.519949 kubelet[2628]: E0527 17:17:42.519636 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:17:42.519949 kubelet[2628]: E0527 17:17:42.519750 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:85fac0f966784eb3ae65fa9fa496f2d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8687f89567-wlhb2_calico-system(2fb07cfb-44c7-4704-bd84-8ea56f199724): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:17:42.522623 containerd[1492]: time="2025-05-27T17:17:42.522538668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 17:17:42.636744 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448).
May 27 17:17:42.669572 containerd[1492]: time="2025-05-27T17:17:42.669416620Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:17:42.670627 containerd[1492]: time="2025-05-27T17:17:42.670500237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:17:42.670627 containerd[1492]: time="2025-05-27T17:17:42.670562916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:17:42.671041 kubelet[2628]: E0527 17:17:42.670977 2628 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:17:42.671133 kubelet[2628]: E0527 17:17:42.671050 2628 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:17:42.671474 kubelet[2628]: E0527 17:17:42.671192 2628 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8687f89567-wlhb2_calico-system(2fb07cfb-44c7-4704-bd84-8ea56f199724): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:17:42.672457 kubelet[2628]: E0527 17:17:42.672369 2628 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-8687f89567-wlhb2" podUID="2fb07cfb-44c7-4704-bd84-8ea56f199724"
May 27 17:17:42.711467 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:ZZNcfTFkFYX46lZGwGlqysxQ9Yikwv1d/hmoNWRTIVY
May 27 17:17:42.712629 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:17:42.718530 systemd-logind[1472]: New session 20 of user core.
May 27 17:17:42.724587 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:17:42.928877 sshd[5442]: Connection closed by 10.0.0.1 port 52448
May 27 17:17:42.929141 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
May 27 17:17:42.934175 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:52448.service: Deactivated successfully.
May 27 17:17:42.936135 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:17:42.936975 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit.
May 27 17:17:42.938091 systemd-logind[1472]: Removed session 20.